00:00:00.001 Started by upstream project "autotest-per-patch" build number 132808
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.088 The recommended git tool is: git
00:00:00.088 using credential 00000000-0000-0000-0000-000000000002
00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.136 Fetching changes from the remote Git repository
00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.176 Using shallow fetch with depth 1
00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.176 > git --version # timeout=10
00:00:00.205 > git --version # 'git version 2.39.2'
00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.229 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.229 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.024 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.035 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.047 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.047 > git config core.sparsecheckout # timeout=10
00:00:07.058 > git read-tree -mu HEAD # timeout=10
00:00:07.074 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.099 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.099 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.201 [Pipeline] Start of Pipeline
00:00:07.212 [Pipeline] library
00:00:07.213 Loading library shm_lib@master
00:00:07.213 Library shm_lib@master is cached. Copying from home.
00:00:07.228 [Pipeline] node
00:00:07.237 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.238 [Pipeline] {
00:00:07.246 [Pipeline] catchError
00:00:07.247 [Pipeline] {
00:00:07.257 [Pipeline] wrap
00:00:07.266 [Pipeline] {
00:00:07.272 [Pipeline] stage
00:00:07.274 [Pipeline] { (Prologue)
00:00:07.482 [Pipeline] sh
00:00:07.824 + logger -p user.info -t JENKINS-CI
00:00:07.862 [Pipeline] echo
00:00:07.867 Node: GP11
00:00:07.875 [Pipeline] sh
00:00:08.188 [Pipeline] setCustomBuildProperty
00:00:08.199 [Pipeline] echo
00:00:08.200 Cleanup processes
00:00:08.205 [Pipeline] sh
00:00:08.493 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.493 1267018 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.506 [Pipeline] sh
00:00:08.790 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.790 ++ awk '{print $1}'
00:00:08.790 ++ grep -v 'sudo pgrep'
00:00:08.790 + sudo kill -9
00:00:08.790 + true
00:00:08.829 [Pipeline] cleanWs
00:00:08.839 [WS-CLEANUP] Deleting project workspace...
00:00:08.839 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.848 [WS-CLEANUP] done
00:00:08.852 [Pipeline] setCustomBuildProperty
00:00:08.865 [Pipeline] sh
00:00:09.149 + sudo git config --global --replace-all safe.directory '*'
00:00:09.250 [Pipeline] httpRequest
00:00:09.924 [Pipeline] echo
00:00:09.926 Sorcerer 10.211.164.112 is alive
00:00:09.935 [Pipeline] retry
00:00:09.937 [Pipeline] {
00:00:09.951 [Pipeline] httpRequest
00:00:09.955 HttpMethod: GET
00:00:09.956 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.957 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.975 Response Code: HTTP/1.1 200 OK
00:00:09.975 Success: Status code 200 is in the accepted range: 200,404
00:00:09.975 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.910 [Pipeline] }
00:00:14.929 [Pipeline] // retry
00:00:14.936 [Pipeline] sh
00:00:15.223 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.239 [Pipeline] httpRequest
00:00:15.639 [Pipeline] echo
00:00:15.641 Sorcerer 10.211.164.112 is alive
00:00:15.651 [Pipeline] retry
00:00:15.654 [Pipeline] {
00:00:15.668 [Pipeline] httpRequest
00:00:15.673 HttpMethod: GET
00:00:15.674 URL: http://10.211.164.112/packages/spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:00:15.675 Sending request to url: http://10.211.164.112/packages/spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:00:15.701 Response Code: HTTP/1.1 200 OK
00:00:15.701 Success: Status code 200 is in the accepted range: 200,404
00:00:15.702 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:01:11.122 [Pipeline] }
00:01:11.141 [Pipeline] // retry
00:01:11.148 [Pipeline] sh
00:01:11.437 + tar --no-same-owner -xf spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:01:14.735 [Pipeline] sh
00:01:15.024 + git -C spdk log --oneline -n5
00:01:15.024 9237e57ed test/check_so_deps: use VERSION to look for prior tags
00:01:15.024 6584139bf build: use VERSION file for storing version
00:01:15.024 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:15.024 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:15.024 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:15.036 [Pipeline] }
00:01:15.050 [Pipeline] // stage
00:01:15.059 [Pipeline] stage
00:01:15.061 [Pipeline] { (Prepare)
00:01:15.077 [Pipeline] writeFile
00:01:15.093 [Pipeline] sh
00:01:15.380 + logger -p user.info -t JENKINS-CI
00:01:15.393 [Pipeline] sh
00:01:15.675 + logger -p user.info -t JENKINS-CI
00:01:15.687 [Pipeline] sh
00:01:15.974 + cat autorun-spdk.conf
00:01:15.975 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.975 SPDK_TEST_NVMF=1
00:01:15.975 SPDK_TEST_NVME_CLI=1
00:01:15.975 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.975 SPDK_TEST_NVMF_NICS=e810
00:01:15.975 SPDK_TEST_VFIOUSER=1
00:01:15.975 SPDK_RUN_UBSAN=1
00:01:15.975 NET_TYPE=phy
00:01:15.983 RUN_NIGHTLY=0
00:01:15.987 [Pipeline] readFile
00:01:16.011 [Pipeline] withEnv
00:01:16.014 [Pipeline] {
00:01:16.026 [Pipeline] sh
00:01:16.314 + set -ex
00:01:16.314 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:16.314 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:16.314 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.314 ++ SPDK_TEST_NVMF=1
00:01:16.314 ++ SPDK_TEST_NVME_CLI=1
00:01:16.314 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.314 ++ SPDK_TEST_NVMF_NICS=e810
00:01:16.314 ++ SPDK_TEST_VFIOUSER=1
00:01:16.314 ++ SPDK_RUN_UBSAN=1
00:01:16.314 ++ NET_TYPE=phy
00:01:16.314 ++ RUN_NIGHTLY=0
00:01:16.314 + case $SPDK_TEST_NVMF_NICS in
00:01:16.314 + DRIVERS=ice
00:01:16.314 + [[ tcp == \r\d\m\a ]]
00:01:16.314 + [[ -n ice ]]
00:01:16.314 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:16.314 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:16.314 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:16.314 rmmod: ERROR: Module irdma is not currently loaded
00:01:16.314 rmmod: ERROR: Module i40iw is not currently loaded
00:01:16.314 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:16.314 + true
00:01:16.314 + for D in $DRIVERS
00:01:16.314 + sudo modprobe ice
00:01:16.314 + exit 0
00:01:16.325 [Pipeline] }
00:01:16.339 [Pipeline] // withEnv
00:01:16.345 [Pipeline] }
00:01:16.359 [Pipeline] // stage
00:01:16.368 [Pipeline] catchError
00:01:16.370 [Pipeline] {
00:01:16.384 [Pipeline] timeout
00:01:16.384 Timeout set to expire in 1 hr 0 min
00:01:16.385 [Pipeline] {
00:01:16.400 [Pipeline] stage
00:01:16.402 [Pipeline] { (Tests)
00:01:16.417 [Pipeline] sh
00:01:16.703 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.704 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.704 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.704 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:16.704 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.704 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:16.704 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:16.704 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:16.704 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:16.704 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:16.704 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:16.704 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.704 + source /etc/os-release
00:01:16.704 ++ NAME='Fedora Linux'
00:01:16.704 ++ VERSION='39 (Cloud Edition)'
00:01:16.704 ++ ID=fedora
00:01:16.704 ++ VERSION_ID=39
00:01:16.704 ++ VERSION_CODENAME=
00:01:16.704 ++ PLATFORM_ID=platform:f39
00:01:16.704 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:16.704 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:16.704 ++ LOGO=fedora-logo-icon
00:01:16.704 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:16.704 ++ HOME_URL=https://fedoraproject.org/
00:01:16.704 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:16.704 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:16.704 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:16.704 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:16.704 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:16.704 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:16.704 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:16.704 ++ SUPPORT_END=2024-11-12
00:01:16.704 ++ VARIANT='Cloud Edition'
00:01:16.704 ++ VARIANT_ID=cloud
00:01:16.704 + uname -a
00:01:16.704 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:16.704 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:17.645 Hugepages
00:01:17.645 node hugesize free / total
00:01:17.645 node0 1048576kB 0 / 0
00:01:17.645 node0 2048kB 0 / 0
00:01:17.645 node1 1048576kB 0 / 0
00:01:17.645 node1 2048kB 0 / 0
00:01:17.645
00:01:17.645 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:17.645 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:17.645 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:17.645 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:17.905 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:17.905 + rm -f /tmp/spdk-ld-path
00:01:17.905 + source autorun-spdk.conf
00:01:17.905 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.905 ++ SPDK_TEST_NVMF=1
00:01:17.905 ++ SPDK_TEST_NVME_CLI=1
00:01:17.905 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.905 ++ SPDK_TEST_NVMF_NICS=e810
00:01:17.905 ++ SPDK_TEST_VFIOUSER=1
00:01:17.905 ++ SPDK_RUN_UBSAN=1
00:01:17.905 ++ NET_TYPE=phy
00:01:17.905 ++ RUN_NIGHTLY=0
00:01:17.905 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:17.905 + [[ -n '' ]]
00:01:17.905 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:17.905 + for M in /var/spdk/build-*-manifest.txt
00:01:17.905 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:17.905 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:17.905 + for M in /var/spdk/build-*-manifest.txt
00:01:17.905 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:17.905 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:17.905 + for M in /var/spdk/build-*-manifest.txt
00:01:17.905 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:17.905 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:17.905 ++ uname
00:01:17.905 + [[ Linux == \L\i\n\u\x ]]
00:01:17.905 + sudo dmesg -T
00:01:17.905 + sudo dmesg --clear
00:01:17.905 + dmesg_pid=1267696
00:01:17.905 + [[ Fedora Linux == FreeBSD ]]
00:01:17.905 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:17.905 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:17.905 + sudo dmesg -Tw
00:01:17.905 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:17.905 + [[ -x /usr/src/fio-static/fio ]]
00:01:17.905 + export FIO_BIN=/usr/src/fio-static/fio
00:01:17.905 + FIO_BIN=/usr/src/fio-static/fio
00:01:17.905 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:17.905 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:17.905 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:17.905 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:17.905 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:17.905 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:17.905 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:17.905 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:17.905 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:17.905 17:50:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:17.905 17:50:40 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:17.905 17:50:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:17.905 17:50:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:17.905 17:50:40 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:17.905 17:50:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:17.905 17:50:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:17.905 17:50:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:17.905 17:50:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:17.905 17:50:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:17.905 17:50:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:17.905 17:50:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.905 17:50:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.905 17:50:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.905 17:50:40 -- paths/export.sh@5 -- $ export PATH
00:01:17.905 17:50:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.905 17:50:40 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:17.905 17:50:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:17.905 17:50:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733763040.XXXXXX
00:01:17.905 17:50:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733763040.HvqPug
00:01:17.905 17:50:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:17.905 17:50:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:17.905 17:50:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:17.905 17:50:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:17.905 17:50:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:17.905 17:50:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:17.905 17:50:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:17.905 17:50:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.905 17:50:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:17.905 17:50:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:17.905 17:50:40 -- pm/common@17 -- $ local monitor
00:01:17.905 17:50:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.905 17:50:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.905 17:50:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.905 17:50:40 -- pm/common@21 -- $ date +%s
00:01:17.905 17:50:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.905 17:50:40 -- pm/common@21 -- $ date +%s
00:01:17.905 17:50:40 -- pm/common@25 -- $ sleep 1
00:01:17.905 17:50:40 -- pm/common@21 -- $ date +%s
00:01:17.905 17:50:40 -- pm/common@21 -- $ date +%s
00:01:17.905 17:50:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733763040
00:01:17.905 17:50:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733763040
00:01:17.905 17:50:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733763040
00:01:17.905 17:50:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733763040
00:01:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733763040_collect-vmstat.pm.log
00:01:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733763040_collect-cpu-load.pm.log
00:01:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733763040_collect-cpu-temp.pm.log
00:01:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733763040_collect-bmc-pm.bmc.pm.log
00:01:18.166 Traceback (most recent call last):
00:01:18.166 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:01:18.166 import spdk.rpc as rpc # noqa
00:01:18.166 ^^^^^^^^^^^^^^^^^^^^^^
00:01:18.166 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module>
00:01:18.166 from .version import __version__
00:01:18.166 ModuleNotFoundError: No module named 'spdk.version'
00:01:18.166 Traceback (most recent call last):
00:01:18.166 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:01:18.166 import spdk.rpc as rpc # noqa
00:01:18.166 ^^^^^^^^^^^^^^^^^^^^^^
00:01:18.166 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module>
00:01:18.166 from .version import __version__
00:01:18.166 ModuleNotFoundError: No module named 'spdk.version'
00:01:19.106 17:50:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:19.106 17:50:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:19.106 17:50:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:19.106 17:50:41 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.106 17:50:41 -- spdk/autobuild.sh@16 -- $ date -u
00:01:19.106 Mon Dec 9 04:50:41 PM UTC 2024
00:01:19.106 17:50:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:19.106 v25.01-pre-305-g9237e57ed
00:01:19.106 17:50:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:19.106 17:50:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:19.106 17:50:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:19.106 17:50:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:19.106 17:50:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:19.106 17:50:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.106 ************************************
00:01:19.106 START TEST ubsan
00:01:19.106 ************************************
00:01:19.106 17:50:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:19.106 using ubsan
00:01:19.106
00:01:19.106 real 0m0.000s
00:01:19.106 user 0m0.000s
00:01:19.106 sys 0m0.000s
00:01:19.106 17:50:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:19.106 17:50:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:19.106 ************************************
00:01:19.106 END TEST ubsan
00:01:19.106 ************************************
17:50:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
17:50:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
17:50:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
17:50:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
17:50:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
17:50:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
17:50:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
17:50:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
17:50:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:19.366 Using 'verbs' RDMA provider
00:01:30.313 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:40.291 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:40.291 Creating mk/config.mk...done.
00:01:40.291 Creating mk/cc.flags.mk...done.
00:01:40.291 Type 'make' to build.
00:01:40.291 17:51:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
17:51:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:51:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:51:03 -- common/autotest_common.sh@10 -- $ set +x
************************************
START TEST make
************************************
17:51:03 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:42.221 The Meson build system
00:01:42.221 Version: 1.5.0
00:01:42.221 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:42.221 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:42.221 Build type: native build
00:01:42.221 Project name: libvfio-user
00:01:42.221 Project version: 0.0.1
00:01:42.221 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:42.221 C linker for the host machine: cc ld.bfd 2.40-14
00:01:42.221 Host machine cpu family: x86_64
00:01:42.221 Host machine cpu: x86_64
00:01:42.221 Run-time dependency threads found: YES
00:01:42.221 Library dl found: YES
00:01:42.221 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:42.221 Run-time dependency json-c found: YES 0.17
00:01:42.221 Run-time dependency cmocka found: YES 1.1.7
00:01:42.221 Program pytest-3 found: NO
00:01:42.221 Program flake8 found: NO
00:01:42.221 Program misspell-fixer found: NO
00:01:42.221 Program restructuredtext-lint found: NO
00:01:42.221 Program valgrind found: YES (/usr/bin/valgrind)
00:01:42.221 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:42.221 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:42.221 Compiler for C supports arguments -Wwrite-strings: YES
00:01:42.221 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:42.221 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:42.221 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:42.221 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:42.221 Build targets in project: 8
00:01:42.221 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:42.221 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:42.221
00:01:42.221 libvfio-user 0.0.1
00:01:42.221
00:01:42.221 User defined options
00:01:42.221 buildtype : debug
00:01:42.221 default_library: shared
00:01:42.221 libdir : /usr/local/lib
00:01:42.221
00:01:42.221 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:43.168 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:43.434 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:43.434 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:43.434 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:43.434 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:43.434 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:43.434 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:43.434 [7/37] Compiling C object samples/null.p/null.c.o
00:01:43.434 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:43.434 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:43.434 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:43.434 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:43.434 [12/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:43.434 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:43.434 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:43.434 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:43.434 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:43.434 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:43.434 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:43.434 [19/37] Compiling C object samples/server.p/server.c.o
00:01:43.434 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:43.434 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:43.434 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:43.434 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:43.434 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:43.696 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:43.696 [26/37] Compiling C object samples/client.p/client.c.o
00:01:43.696 [27/37] Linking target samples/client
00:01:43.696 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:43.696 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:43.696 [30/37] Linking target test/unit_tests
00:01:43.696 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:43.967 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:43.967 [33/37] Linking target samples/server
00:01:43.967 [34/37] Linking target samples/lspci
00:01:43.967 [35/37] Linking target samples/shadow_ioeventfd_server
00:01:43.967 [36/37] Linking target samples/null
00:01:43.967 [37/37] Linking target samples/gpio-pci-idio-16
00:01:43.967 INFO: autodetecting backend as ninja
00:01:43.967 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:44.233 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.175 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:45.175 ninja: no work to do.
00:01:50.457 The Meson build system
00:01:50.457 Version: 1.5.0
00:01:50.457 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:50.457 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:50.457 Build type: native build
00:01:50.457 Program cat found: YES (/usr/bin/cat)
00:01:50.457 Project name: DPDK
00:01:50.457 Project version: 24.03.0
00:01:50.457 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:50.457 C linker for the host machine: cc ld.bfd 2.40-14
00:01:50.457 Host machine cpu family: x86_64
00:01:50.457 Host machine cpu: x86_64
00:01:50.457 Message: ## Building in Developer Mode ##
00:01:50.457 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:50.457 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:50.457 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:50.457 Program python3 found: YES (/usr/bin/python3)
00:01:50.457 Program cat found: YES (/usr/bin/cat)
00:01:50.457 Compiler for C supports arguments -march=native: YES
00:01:50.457 Checking for size of "void *" : 8
00:01:50.457 Checking for size of "void *" : 8 (cached)
00:01:50.457 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:50.457 Library m found: YES
00:01:50.457 Library numa found: YES
00:01:50.457 Has header "numaif.h" : YES
00:01:50.457 Library fdt found: NO
00:01:50.457 Library execinfo found: NO
00:01:50.457 Has header "execinfo.h" : YES
00:01:50.457 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:50.457 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:50.457 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:50.457 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:50.457 Run-time dependency openssl found: YES 3.1.1
00:01:50.457 Run-time dependency libpcap found: YES 1.10.4
00:01:50.457 Has header "pcap.h" with dependency libpcap: YES
00:01:50.457 Compiler for C supports arguments -Wcast-qual: YES
00:01:50.457 Compiler for C supports arguments -Wdeprecated: YES
00:01:50.457 Compiler for C supports arguments -Wformat: YES
00:01:50.457 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:50.457 Compiler for C supports arguments -Wformat-security: NO
00:01:50.457 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:50.457 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:50.457 Compiler for C supports arguments -Wnested-externs: YES
00:01:50.457 Compiler for C supports arguments -Wold-style-definition: YES
00:01:50.457 Compiler for C supports arguments -Wpointer-arith: YES
00:01:50.457 Compiler for C supports arguments -Wsign-compare: YES
00:01:50.457 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:50.458 Compiler for C supports arguments -Wundef: YES
00:01:50.458 Compiler for C supports arguments -Wwrite-strings: YES
00:01:50.458 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:50.458 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:50.458 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:50.458 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:50.458 Program objdump found: YES (/usr/bin/objdump)
00:01:50.458 Compiler for C supports arguments -mavx512f: YES
00:01:50.458 Checking if "AVX512 checking" compiles: YES
00:01:50.458
Fetching value of define "__SSE4_2__" : 1 00:01:50.458 Fetching value of define "__AES__" : 1 00:01:50.458 Fetching value of define "__AVX__" : 1 00:01:50.458 Fetching value of define "__AVX2__" : (undefined) 00:01:50.458 Fetching value of define "__AVX512BW__" : (undefined) 00:01:50.458 Fetching value of define "__AVX512CD__" : (undefined) 00:01:50.458 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:50.458 Fetching value of define "__AVX512F__" : (undefined) 00:01:50.458 Fetching value of define "__AVX512VL__" : (undefined) 00:01:50.458 Fetching value of define "__PCLMUL__" : 1 00:01:50.458 Fetching value of define "__RDRND__" : 1 00:01:50.458 Fetching value of define "__RDSEED__" : (undefined) 00:01:50.458 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.458 Fetching value of define "__znver1__" : (undefined) 00:01:50.458 Fetching value of define "__znver2__" : (undefined) 00:01:50.458 Fetching value of define "__znver3__" : (undefined) 00:01:50.458 Fetching value of define "__znver4__" : (undefined) 00:01:50.458 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.458 Message: lib/log: Defining dependency "log" 00:01:50.458 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.458 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.458 Checking for function "getentropy" : NO 00:01:50.458 Message: lib/eal: Defining dependency "eal" 00:01:50.458 Message: lib/ring: Defining dependency "ring" 00:01:50.458 Message: lib/rcu: Defining dependency "rcu" 00:01:50.458 Message: lib/mempool: Defining dependency "mempool" 00:01:50.458 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.458 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.458 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.458 Compiler for C supports arguments -mpclmul: YES 00:01:50.458 Compiler for C supports arguments -maes: YES 00:01:50.458 Compiler for C supports arguments -mavx512f: YES (cached) 
00:01:50.458 Compiler for C supports arguments -mavx512bw: YES 00:01:50.458 Compiler for C supports arguments -mavx512dq: YES 00:01:50.458 Compiler for C supports arguments -mavx512vl: YES 00:01:50.458 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.458 Compiler for C supports arguments -mavx2: YES 00:01:50.458 Compiler for C supports arguments -mavx: YES 00:01:50.458 Message: lib/net: Defining dependency "net" 00:01:50.458 Message: lib/meter: Defining dependency "meter" 00:01:50.458 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.458 Message: lib/pci: Defining dependency "pci" 00:01:50.458 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.458 Message: lib/hash: Defining dependency "hash" 00:01:50.458 Message: lib/timer: Defining dependency "timer" 00:01:50.458 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.458 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.458 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.458 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.458 Message: lib/power: Defining dependency "power" 00:01:50.458 Message: lib/reorder: Defining dependency "reorder" 00:01:50.458 Message: lib/security: Defining dependency "security" 00:01:50.458 Has header "linux/userfaultfd.h" : YES 00:01:50.458 Has header "linux/vduse.h" : YES 00:01:50.458 Message: lib/vhost: Defining dependency "vhost" 00:01:50.458 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.458 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.458 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.458 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.458 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.458 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.458 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.458 Message: 
Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.458 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.458 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.458 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.458 Configuring doxy-api-html.conf using configuration 00:01:50.458 Configuring doxy-api-man.conf using configuration 00:01:50.458 Program mandb found: YES (/usr/bin/mandb) 00:01:50.458 Program sphinx-build found: NO 00:01:50.458 Configuring rte_build_config.h using configuration 00:01:50.458 Message: 00:01:50.458 ================= 00:01:50.458 Applications Enabled 00:01:50.458 ================= 00:01:50.458 00:01:50.458 apps: 00:01:50.458 00:01:50.458 00:01:50.458 Message: 00:01:50.458 ================= 00:01:50.458 Libraries Enabled 00:01:50.458 ================= 00:01:50.458 00:01:50.458 libs: 00:01:50.458 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.458 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.458 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.458 00:01:50.458 Message: 00:01:50.458 =============== 00:01:50.458 Drivers Enabled 00:01:50.458 =============== 00:01:50.458 00:01:50.458 common: 00:01:50.458 00:01:50.458 bus: 00:01:50.458 pci, vdev, 00:01:50.458 mempool: 00:01:50.458 ring, 00:01:50.458 dma: 00:01:50.458 00:01:50.458 net: 00:01:50.458 00:01:50.458 crypto: 00:01:50.458 00:01:50.458 compress: 00:01:50.458 00:01:50.458 vdpa: 00:01:50.458 00:01:50.458 00:01:50.458 Message: 00:01:50.458 ================= 00:01:50.458 Content Skipped 00:01:50.458 ================= 00:01:50.458 00:01:50.458 apps: 00:01:50.458 dumpcap: explicitly disabled via build config 00:01:50.458 graph: explicitly disabled via build config 00:01:50.458 pdump: explicitly disabled via build config 00:01:50.458 proc-info: explicitly disabled via build config 00:01:50.458 test-acl: explicitly disabled via build config 
00:01:50.458 test-bbdev: explicitly disabled via build config 00:01:50.458 test-cmdline: explicitly disabled via build config 00:01:50.458 test-compress-perf: explicitly disabled via build config 00:01:50.458 test-crypto-perf: explicitly disabled via build config 00:01:50.458 test-dma-perf: explicitly disabled via build config 00:01:50.458 test-eventdev: explicitly disabled via build config 00:01:50.458 test-fib: explicitly disabled via build config 00:01:50.458 test-flow-perf: explicitly disabled via build config 00:01:50.458 test-gpudev: explicitly disabled via build config 00:01:50.458 test-mldev: explicitly disabled via build config 00:01:50.458 test-pipeline: explicitly disabled via build config 00:01:50.458 test-pmd: explicitly disabled via build config 00:01:50.458 test-regex: explicitly disabled via build config 00:01:50.458 test-sad: explicitly disabled via build config 00:01:50.458 test-security-perf: explicitly disabled via build config 00:01:50.458 00:01:50.458 libs: 00:01:50.458 argparse: explicitly disabled via build config 00:01:50.458 metrics: explicitly disabled via build config 00:01:50.458 acl: explicitly disabled via build config 00:01:50.458 bbdev: explicitly disabled via build config 00:01:50.458 bitratestats: explicitly disabled via build config 00:01:50.458 bpf: explicitly disabled via build config 00:01:50.458 cfgfile: explicitly disabled via build config 00:01:50.458 distributor: explicitly disabled via build config 00:01:50.458 efd: explicitly disabled via build config 00:01:50.458 eventdev: explicitly disabled via build config 00:01:50.458 dispatcher: explicitly disabled via build config 00:01:50.458 gpudev: explicitly disabled via build config 00:01:50.458 gro: explicitly disabled via build config 00:01:50.458 gso: explicitly disabled via build config 00:01:50.458 ip_frag: explicitly disabled via build config 00:01:50.458 jobstats: explicitly disabled via build config 00:01:50.458 latencystats: explicitly disabled via build config 
00:01:50.458 lpm: explicitly disabled via build config 00:01:50.458 member: explicitly disabled via build config 00:01:50.458 pcapng: explicitly disabled via build config 00:01:50.458 rawdev: explicitly disabled via build config 00:01:50.458 regexdev: explicitly disabled via build config 00:01:50.458 mldev: explicitly disabled via build config 00:01:50.458 rib: explicitly disabled via build config 00:01:50.458 sched: explicitly disabled via build config 00:01:50.458 stack: explicitly disabled via build config 00:01:50.458 ipsec: explicitly disabled via build config 00:01:50.458 pdcp: explicitly disabled via build config 00:01:50.458 fib: explicitly disabled via build config 00:01:50.458 port: explicitly disabled via build config 00:01:50.458 pdump: explicitly disabled via build config 00:01:50.458 table: explicitly disabled via build config 00:01:50.458 pipeline: explicitly disabled via build config 00:01:50.458 graph: explicitly disabled via build config 00:01:50.458 node: explicitly disabled via build config 00:01:50.458 00:01:50.458 drivers: 00:01:50.458 common/cpt: not in enabled drivers build config 00:01:50.458 common/dpaax: not in enabled drivers build config 00:01:50.458 common/iavf: not in enabled drivers build config 00:01:50.458 common/idpf: not in enabled drivers build config 00:01:50.458 common/ionic: not in enabled drivers build config 00:01:50.458 common/mvep: not in enabled drivers build config 00:01:50.458 common/octeontx: not in enabled drivers build config 00:01:50.458 bus/auxiliary: not in enabled drivers build config 00:01:50.458 bus/cdx: not in enabled drivers build config 00:01:50.458 bus/dpaa: not in enabled drivers build config 00:01:50.458 bus/fslmc: not in enabled drivers build config 00:01:50.458 bus/ifpga: not in enabled drivers build config 00:01:50.458 bus/platform: not in enabled drivers build config 00:01:50.458 bus/uacce: not in enabled drivers build config 00:01:50.458 bus/vmbus: not in enabled drivers build config 00:01:50.458 
common/cnxk: not in enabled drivers build config 00:01:50.458 common/mlx5: not in enabled drivers build config 00:01:50.458 common/nfp: not in enabled drivers build config 00:01:50.458 common/nitrox: not in enabled drivers build config 00:01:50.458 common/qat: not in enabled drivers build config 00:01:50.458 common/sfc_efx: not in enabled drivers build config 00:01:50.458 mempool/bucket: not in enabled drivers build config 00:01:50.458 mempool/cnxk: not in enabled drivers build config 00:01:50.458 mempool/dpaa: not in enabled drivers build config 00:01:50.458 mempool/dpaa2: not in enabled drivers build config 00:01:50.459 mempool/octeontx: not in enabled drivers build config 00:01:50.459 mempool/stack: not in enabled drivers build config 00:01:50.459 dma/cnxk: not in enabled drivers build config 00:01:50.459 dma/dpaa: not in enabled drivers build config 00:01:50.459 dma/dpaa2: not in enabled drivers build config 00:01:50.459 dma/hisilicon: not in enabled drivers build config 00:01:50.459 dma/idxd: not in enabled drivers build config 00:01:50.459 dma/ioat: not in enabled drivers build config 00:01:50.459 dma/skeleton: not in enabled drivers build config 00:01:50.459 net/af_packet: not in enabled drivers build config 00:01:50.459 net/af_xdp: not in enabled drivers build config 00:01:50.459 net/ark: not in enabled drivers build config 00:01:50.459 net/atlantic: not in enabled drivers build config 00:01:50.459 net/avp: not in enabled drivers build config 00:01:50.459 net/axgbe: not in enabled drivers build config 00:01:50.459 net/bnx2x: not in enabled drivers build config 00:01:50.459 net/bnxt: not in enabled drivers build config 00:01:50.459 net/bonding: not in enabled drivers build config 00:01:50.459 net/cnxk: not in enabled drivers build config 00:01:50.459 net/cpfl: not in enabled drivers build config 00:01:50.459 net/cxgbe: not in enabled drivers build config 00:01:50.459 net/dpaa: not in enabled drivers build config 00:01:50.459 net/dpaa2: not in enabled drivers 
build config 00:01:50.459 net/e1000: not in enabled drivers build config 00:01:50.459 net/ena: not in enabled drivers build config 00:01:50.459 net/enetc: not in enabled drivers build config 00:01:50.459 net/enetfec: not in enabled drivers build config 00:01:50.459 net/enic: not in enabled drivers build config 00:01:50.459 net/failsafe: not in enabled drivers build config 00:01:50.459 net/fm10k: not in enabled drivers build config 00:01:50.459 net/gve: not in enabled drivers build config 00:01:50.459 net/hinic: not in enabled drivers build config 00:01:50.459 net/hns3: not in enabled drivers build config 00:01:50.459 net/i40e: not in enabled drivers build config 00:01:50.459 net/iavf: not in enabled drivers build config 00:01:50.459 net/ice: not in enabled drivers build config 00:01:50.459 net/idpf: not in enabled drivers build config 00:01:50.459 net/igc: not in enabled drivers build config 00:01:50.459 net/ionic: not in enabled drivers build config 00:01:50.459 net/ipn3ke: not in enabled drivers build config 00:01:50.459 net/ixgbe: not in enabled drivers build config 00:01:50.459 net/mana: not in enabled drivers build config 00:01:50.459 net/memif: not in enabled drivers build config 00:01:50.459 net/mlx4: not in enabled drivers build config 00:01:50.459 net/mlx5: not in enabled drivers build config 00:01:50.459 net/mvneta: not in enabled drivers build config 00:01:50.459 net/mvpp2: not in enabled drivers build config 00:01:50.459 net/netvsc: not in enabled drivers build config 00:01:50.459 net/nfb: not in enabled drivers build config 00:01:50.459 net/nfp: not in enabled drivers build config 00:01:50.459 net/ngbe: not in enabled drivers build config 00:01:50.459 net/null: not in enabled drivers build config 00:01:50.459 net/octeontx: not in enabled drivers build config 00:01:50.459 net/octeon_ep: not in enabled drivers build config 00:01:50.459 net/pcap: not in enabled drivers build config 00:01:50.459 net/pfe: not in enabled drivers build config 00:01:50.459 
net/qede: not in enabled drivers build config 00:01:50.459 net/ring: not in enabled drivers build config 00:01:50.459 net/sfc: not in enabled drivers build config 00:01:50.459 net/softnic: not in enabled drivers build config 00:01:50.459 net/tap: not in enabled drivers build config 00:01:50.459 net/thunderx: not in enabled drivers build config 00:01:50.459 net/txgbe: not in enabled drivers build config 00:01:50.459 net/vdev_netvsc: not in enabled drivers build config 00:01:50.459 net/vhost: not in enabled drivers build config 00:01:50.459 net/virtio: not in enabled drivers build config 00:01:50.459 net/vmxnet3: not in enabled drivers build config 00:01:50.459 raw/*: missing internal dependency, "rawdev" 00:01:50.459 crypto/armv8: not in enabled drivers build config 00:01:50.459 crypto/bcmfs: not in enabled drivers build config 00:01:50.459 crypto/caam_jr: not in enabled drivers build config 00:01:50.459 crypto/ccp: not in enabled drivers build config 00:01:50.459 crypto/cnxk: not in enabled drivers build config 00:01:50.459 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.459 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.459 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.459 crypto/mlx5: not in enabled drivers build config 00:01:50.459 crypto/mvsam: not in enabled drivers build config 00:01:50.459 crypto/nitrox: not in enabled drivers build config 00:01:50.459 crypto/null: not in enabled drivers build config 00:01:50.459 crypto/octeontx: not in enabled drivers build config 00:01:50.459 crypto/openssl: not in enabled drivers build config 00:01:50.459 crypto/scheduler: not in enabled drivers build config 00:01:50.459 crypto/uadk: not in enabled drivers build config 00:01:50.459 crypto/virtio: not in enabled drivers build config 00:01:50.459 compress/isal: not in enabled drivers build config 00:01:50.459 compress/mlx5: not in enabled drivers build config 00:01:50.459 compress/nitrox: not in enabled drivers build config 
00:01:50.459 compress/octeontx: not in enabled drivers build config 00:01:50.459 compress/zlib: not in enabled drivers build config 00:01:50.459 regex/*: missing internal dependency, "regexdev" 00:01:50.459 ml/*: missing internal dependency, "mldev" 00:01:50.459 vdpa/ifc: not in enabled drivers build config 00:01:50.459 vdpa/mlx5: not in enabled drivers build config 00:01:50.459 vdpa/nfp: not in enabled drivers build config 00:01:50.459 vdpa/sfc: not in enabled drivers build config 00:01:50.459 event/*: missing internal dependency, "eventdev" 00:01:50.459 baseband/*: missing internal dependency, "bbdev" 00:01:50.459 gpu/*: missing internal dependency, "gpudev" 00:01:50.459 00:01:50.459 00:01:50.459 Build targets in project: 85 00:01:50.459 00:01:50.459 DPDK 24.03.0 00:01:50.459 00:01:50.459 User defined options 00:01:50.459 buildtype : debug 00:01:50.459 default_library : shared 00:01:50.459 libdir : lib 00:01:50.459 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.459 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.459 c_link_args : 00:01:50.459 cpu_instruction_set: native 00:01:50.459 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:50.459 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:50.459 enable_docs : false 00:01:50.459 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:50.459 enable_kmods : false 00:01:50.459 max_lcores : 128 00:01:50.459 tests : false 
00:01:50.459 00:01:50.459 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.459 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.459 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.459 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.459 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.459 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.459 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.459 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.459 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.459 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.459 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.459 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.459 [11/268] Linking static target lib/librte_kvargs.a 00:01:50.459 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.459 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.459 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.459 [15/268] Linking static target lib/librte_log.a 00:01:50.718 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.288 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.288 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.288 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.288 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.288 [21/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.288 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.288 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.288 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.288 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.288 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.288 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.288 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.288 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.288 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.288 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.288 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.288 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.559 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.559 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.559 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.559 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.559 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.559 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.559 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.559 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.559 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.559 [43/268] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.559 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.559 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.559 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.559 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.559 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.559 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.559 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.559 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.559 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.559 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.559 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.559 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.559 [56/268] Linking static target lib/librte_telemetry.a 00:01:51.559 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.559 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.559 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.559 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.817 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.817 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.817 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.817 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.817 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.817 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.817 [67/268] Linking target lib/librte_log.so.24.1 00:01:52.079 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.079 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.079 [70/268] Linking static target lib/librte_pci.a 00:01:52.079 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.079 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.342 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.342 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.342 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:52.342 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.342 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.343 [78/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.343 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.343 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.343 [81/268] Linking target lib/librte_kvargs.so.24.1 00:01:52.343 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.343 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.343 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.343 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.343 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.343 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.343 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.343 [89/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.343 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.343 [91/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.343 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.343 [93/268] Linking static target lib/librte_ring.a 00:01:52.343 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.343 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.604 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.604 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.604 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.604 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.604 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.604 [101/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.604 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.604 [103/268] Linking static target lib/librte_meter.a 00:01:52.604 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.604 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.604 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.604 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.604 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.604 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.604 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.604 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.604 
[112/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.604 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.604 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.604 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.604 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.604 [117/268] Linking static target lib/librte_eal.a 00:01:52.604 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.604 [119/268] Linking static target lib/librte_rcu.a 00:01:52.604 [120/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.869 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.869 [122/268] Linking static target lib/librte_mempool.a 00:01:52.869 [123/268] Linking target lib/librte_telemetry.so.24.1 00:01:52.869 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.869 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.869 [126/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.869 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.869 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.869 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.869 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.131 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.131 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.131 [133/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.131 [134/268] Linking static target lib/librte_net.a 00:01:53.131 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:01:53.131 [136/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:53.131 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.131 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.131 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.131 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.131 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.131 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.394 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.394 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:53.394 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.394 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.394 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.394 [148/268] Linking static target lib/librte_cmdline.a 00:01:53.394 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.394 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.394 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.394 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.394 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.394 [154/268] Linking static target lib/librte_timer.a 00:01:53.653 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.653 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.653 [157/268] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.653 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.653 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.653 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.653 [161/268] Linking static target lib/librte_dmadev.a 00:01:53.653 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.653 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.653 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.913 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.913 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.913 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.913 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.913 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.913 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.913 [171/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.913 [172/268] Linking static target lib/librte_compressdev.a 00:01:53.913 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.913 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.913 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.913 [176/268] Linking static target lib/librte_hash.a 00:01:53.913 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.913 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.913 [179/268] Linking static target 
lib/librte_power.a 00:01:53.913 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.913 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.173 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.173 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.173 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.173 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.173 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.173 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.173 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.173 [189/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:54.173 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.173 [191/268] Linking static target lib/librte_mbuf.a 00:01:54.173 [192/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.173 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.173 [194/268] Linking static target lib/librte_reorder.a 00:01:54.431 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.431 [196/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.431 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.431 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.431 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.431 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.431 [201/268] Linking static target drivers/librte_bus_vdev.a 
00:01:54.431 [202/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.431 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.431 [204/268] Linking static target drivers/librte_bus_pci.a 00:01:54.431 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.431 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.431 [207/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.431 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.431 [209/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.431 [210/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.431 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.688 [212/268] Linking static target lib/librte_security.a 00:01:54.688 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.688 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.688 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.688 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.688 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.688 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.688 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.688 [220/268] Linking static target drivers/librte_mempool_ring.a 00:01:54.946 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.946 [222/268] Linking static 
target lib/librte_cryptodev.a 00:01:54.946 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.946 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.946 [225/268] Linking static target lib/librte_ethdev.a 00:01:54.946 [226/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.899 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.273 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.174 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.174 [230/268] Linking target lib/librte_eal.so.24.1 00:01:59.174 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.174 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.174 [233/268] Linking target lib/librte_ring.so.24.1 00:01:59.174 [234/268] Linking target lib/librte_timer.so.24.1 00:01:59.174 [235/268] Linking target lib/librte_meter.so.24.1 00:01:59.174 [236/268] Linking target lib/librte_pci.so.24.1 00:01:59.174 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:59.174 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:59.432 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.432 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.432 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.432 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.432 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.432 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:59.432 [245/268] Linking target 
lib/librte_mempool.so.24.1 00:01:59.432 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.432 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.432 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.690 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:59.690 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:59.690 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.690 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:59.690 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:59.690 [254/268] Linking target lib/librte_net.so.24.1 00:01:59.690 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:59.948 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.948 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.948 [258/268] Linking target lib/librte_hash.so.24.1 00:01:59.948 [259/268] Linking target lib/librte_security.so.24.1 00:01:59.948 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:59.948 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:59.948 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.206 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:00.206 [264/268] Linking target lib/librte_power.so.24.1 00:02:03.487 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.487 [266/268] Linking static target lib/librte_vhost.a 00:02:04.053 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.053 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:04.053 INFO: autodetecting backend as ninja 00:02:04.053 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:26.038 CC lib/log/log.o 00:02:26.038 CC lib/log/log_flags.o 00:02:26.038 CC lib/log/log_deprecated.o 00:02:26.038 CC lib/ut_mock/mock.o 00:02:26.038 CC lib/ut/ut.o 00:02:26.038 LIB libspdk_log.a 00:02:26.038 LIB libspdk_ut_mock.a 00:02:26.038 LIB libspdk_ut.a 00:02:26.038 SO libspdk_ut_mock.so.6.0 00:02:26.038 SO libspdk_ut.so.2.0 00:02:26.038 SO libspdk_log.so.7.1 00:02:26.038 SYMLINK libspdk_ut_mock.so 00:02:26.038 SYMLINK libspdk_ut.so 00:02:26.038 SYMLINK libspdk_log.so 00:02:26.038 CC lib/ioat/ioat.o 00:02:26.038 CC lib/dma/dma.o 00:02:26.038 CXX lib/trace_parser/trace.o 00:02:26.038 CC lib/util/base64.o 00:02:26.038 CC lib/util/bit_array.o 00:02:26.038 CC lib/util/cpuset.o 00:02:26.038 CC lib/util/crc16.o 00:02:26.038 CC lib/util/crc32.o 00:02:26.038 CC lib/util/crc32c.o 00:02:26.038 CC lib/util/crc32_ieee.o 00:02:26.038 CC lib/util/crc64.o 00:02:26.038 CC lib/util/dif.o 00:02:26.038 CC lib/util/fd.o 00:02:26.038 CC lib/util/fd_group.o 00:02:26.038 CC lib/util/file.o 00:02:26.038 CC lib/util/hexlify.o 00:02:26.038 CC lib/util/iov.o 00:02:26.038 CC lib/util/math.o 00:02:26.038 CC lib/util/net.o 00:02:26.038 CC lib/util/strerror_tls.o 00:02:26.038 CC lib/util/pipe.o 00:02:26.038 CC lib/util/string.o 00:02:26.038 CC lib/util/uuid.o 00:02:26.038 CC lib/util/zipf.o 00:02:26.038 CC lib/util/xor.o 00:02:26.038 CC lib/util/md5.o 00:02:26.038 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.038 CC lib/vfio_user/host/vfio_user.o 00:02:26.038 LIB libspdk_dma.a 00:02:26.038 SO libspdk_dma.so.5.0 00:02:26.038 SYMLINK libspdk_dma.so 00:02:26.038 LIB libspdk_ioat.a 00:02:26.038 SO libspdk_ioat.so.7.0 00:02:26.038 SYMLINK libspdk_ioat.so 00:02:26.038 LIB libspdk_vfio_user.a 00:02:26.038 SO libspdk_vfio_user.so.5.0 00:02:26.038 SYMLINK libspdk_vfio_user.so 00:02:26.038 LIB libspdk_util.a 00:02:26.038 SO libspdk_util.so.10.1 00:02:26.038 SYMLINK libspdk_util.so 00:02:26.038 CC lib/vmd/vmd.o 00:02:26.038 
CC lib/conf/conf.o 00:02:26.038 LIB libspdk_trace_parser.a 00:02:26.038 CC lib/vmd/led.o 00:02:26.038 CC lib/json/json_parse.o 00:02:26.038 CC lib/idxd/idxd.o 00:02:26.038 CC lib/json/json_util.o 00:02:26.038 CC lib/idxd/idxd_user.o 00:02:26.038 CC lib/rdma_utils/rdma_utils.o 00:02:26.038 CC lib/json/json_write.o 00:02:26.038 CC lib/env_dpdk/env.o 00:02:26.038 CC lib/idxd/idxd_kernel.o 00:02:26.038 CC lib/env_dpdk/memory.o 00:02:26.038 CC lib/env_dpdk/pci.o 00:02:26.038 CC lib/env_dpdk/init.o 00:02:26.038 CC lib/env_dpdk/threads.o 00:02:26.038 CC lib/env_dpdk/pci_ioat.o 00:02:26.038 CC lib/env_dpdk/pci_virtio.o 00:02:26.038 CC lib/env_dpdk/pci_vmd.o 00:02:26.038 CC lib/env_dpdk/pci_idxd.o 00:02:26.038 CC lib/env_dpdk/pci_event.o 00:02:26.038 CC lib/env_dpdk/sigbus_handler.o 00:02:26.038 CC lib/env_dpdk/pci_dpdk.o 00:02:26.038 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.038 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:26.038 SO libspdk_trace_parser.so.6.0 00:02:26.038 SYMLINK libspdk_trace_parser.so 00:02:26.038 LIB libspdk_rdma_utils.a 00:02:26.038 SO libspdk_rdma_utils.so.1.0 00:02:26.038 LIB libspdk_conf.a 00:02:26.038 LIB libspdk_json.a 00:02:26.038 SO libspdk_conf.so.6.0 00:02:26.038 SO libspdk_json.so.6.0 00:02:26.038 SYMLINK libspdk_rdma_utils.so 00:02:26.038 SYMLINK libspdk_conf.so 00:02:26.038 SYMLINK libspdk_json.so 00:02:26.038 CC lib/rdma_provider/common.o 00:02:26.038 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:26.038 CC lib/jsonrpc/jsonrpc_server.o 00:02:26.038 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:26.038 CC lib/jsonrpc/jsonrpc_client.o 00:02:26.038 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:26.038 LIB libspdk_idxd.a 00:02:26.038 SO libspdk_idxd.so.12.1 00:02:26.038 LIB libspdk_vmd.a 00:02:26.038 SYMLINK libspdk_idxd.so 00:02:26.038 SO libspdk_vmd.so.6.0 00:02:26.038 SYMLINK libspdk_vmd.so 00:02:26.038 LIB libspdk_rdma_provider.a 00:02:26.038 SO libspdk_rdma_provider.so.7.0 00:02:26.038 SYMLINK libspdk_rdma_provider.so 00:02:26.038 LIB libspdk_jsonrpc.a 
00:02:26.038 SO libspdk_jsonrpc.so.6.0 00:02:26.038 SYMLINK libspdk_jsonrpc.so 00:02:26.038 CC lib/rpc/rpc.o 00:02:26.039 LIB libspdk_rpc.a 00:02:26.039 SO libspdk_rpc.so.6.0 00:02:26.297 SYMLINK libspdk_rpc.so 00:02:26.297 CC lib/trace/trace.o 00:02:26.297 CC lib/trace/trace_flags.o 00:02:26.297 CC lib/trace/trace_rpc.o 00:02:26.297 CC lib/notify/notify.o 00:02:26.297 CC lib/notify/notify_rpc.o 00:02:26.297 CC lib/keyring/keyring.o 00:02:26.297 CC lib/keyring/keyring_rpc.o 00:02:26.556 LIB libspdk_notify.a 00:02:26.556 SO libspdk_notify.so.6.0 00:02:26.556 SYMLINK libspdk_notify.so 00:02:26.556 LIB libspdk_keyring.a 00:02:26.556 LIB libspdk_trace.a 00:02:26.556 SO libspdk_keyring.so.2.0 00:02:26.556 SO libspdk_trace.so.11.0 00:02:26.556 SYMLINK libspdk_keyring.so 00:02:26.814 SYMLINK libspdk_trace.so 00:02:26.814 LIB libspdk_env_dpdk.a 00:02:26.814 SO libspdk_env_dpdk.so.15.1 00:02:26.814 CC lib/thread/thread.o 00:02:26.814 CC lib/thread/iobuf.o 00:02:26.814 CC lib/sock/sock.o 00:02:26.814 CC lib/sock/sock_rpc.o 00:02:27.072 SYMLINK libspdk_env_dpdk.so 00:02:27.330 LIB libspdk_sock.a 00:02:27.330 SO libspdk_sock.so.10.0 00:02:27.330 SYMLINK libspdk_sock.so 00:02:27.588 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.588 CC lib/nvme/nvme_ctrlr.o 00:02:27.588 CC lib/nvme/nvme_fabric.o 00:02:27.588 CC lib/nvme/nvme_ns_cmd.o 00:02:27.588 CC lib/nvme/nvme_ns.o 00:02:27.588 CC lib/nvme/nvme_pcie_common.o 00:02:27.588 CC lib/nvme/nvme_pcie.o 00:02:27.588 CC lib/nvme/nvme_qpair.o 00:02:27.588 CC lib/nvme/nvme.o 00:02:27.588 CC lib/nvme/nvme_quirks.o 00:02:27.588 CC lib/nvme/nvme_transport.o 00:02:27.588 CC lib/nvme/nvme_discovery.o 00:02:27.588 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.588 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.588 CC lib/nvme/nvme_tcp.o 00:02:27.588 CC lib/nvme/nvme_opal.o 00:02:27.588 CC lib/nvme/nvme_io_msg.o 00:02:27.588 CC lib/nvme/nvme_poll_group.o 00:02:27.588 CC lib/nvme/nvme_zns.o 00:02:27.588 CC lib/nvme/nvme_stubs.o 00:02:27.588 CC 
lib/nvme/nvme_auth.o 00:02:27.588 CC lib/nvme/nvme_cuse.o 00:02:27.588 CC lib/nvme/nvme_vfio_user.o 00:02:27.588 CC lib/nvme/nvme_rdma.o 00:02:28.524 LIB libspdk_thread.a 00:02:28.524 SO libspdk_thread.so.11.0 00:02:28.524 SYMLINK libspdk_thread.so 00:02:28.782 CC lib/fsdev/fsdev.o 00:02:28.782 CC lib/accel/accel.o 00:02:28.782 CC lib/fsdev/fsdev_io.o 00:02:28.782 CC lib/virtio/virtio.o 00:02:28.782 CC lib/fsdev/fsdev_rpc.o 00:02:28.782 CC lib/vfu_tgt/tgt_endpoint.o 00:02:28.782 CC lib/init/json_config.o 00:02:28.782 CC lib/blob/blobstore.o 00:02:28.782 CC lib/accel/accel_rpc.o 00:02:28.782 CC lib/virtio/virtio_vhost_user.o 00:02:28.782 CC lib/blob/request.o 00:02:28.782 CC lib/vfu_tgt/tgt_rpc.o 00:02:28.782 CC lib/init/subsystem.o 00:02:28.782 CC lib/accel/accel_sw.o 00:02:28.782 CC lib/init/subsystem_rpc.o 00:02:28.782 CC lib/blob/zeroes.o 00:02:28.782 CC lib/virtio/virtio_vfio_user.o 00:02:28.782 CC lib/init/rpc.o 00:02:28.782 CC lib/blob/blob_bs_dev.o 00:02:28.782 CC lib/virtio/virtio_pci.o 00:02:29.041 LIB libspdk_init.a 00:02:29.041 SO libspdk_init.so.6.0 00:02:29.041 LIB libspdk_virtio.a 00:02:29.041 SYMLINK libspdk_init.so 00:02:29.041 LIB libspdk_vfu_tgt.a 00:02:29.299 SO libspdk_virtio.so.7.0 00:02:29.299 SO libspdk_vfu_tgt.so.3.0 00:02:29.299 SYMLINK libspdk_vfu_tgt.so 00:02:29.299 SYMLINK libspdk_virtio.so 00:02:29.299 CC lib/event/app.o 00:02:29.299 CC lib/event/reactor.o 00:02:29.299 CC lib/event/log_rpc.o 00:02:29.299 CC lib/event/app_rpc.o 00:02:29.299 CC lib/event/scheduler_static.o 00:02:29.558 LIB libspdk_fsdev.a 00:02:29.558 SO libspdk_fsdev.so.2.0 00:02:29.558 SYMLINK libspdk_fsdev.so 00:02:29.816 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:29.816 LIB libspdk_event.a 00:02:29.816 SO libspdk_event.so.14.0 00:02:29.816 SYMLINK libspdk_event.so 00:02:30.087 LIB libspdk_accel.a 00:02:30.087 SO libspdk_accel.so.16.0 00:02:30.087 SYMLINK libspdk_accel.so 00:02:30.087 LIB libspdk_nvme.a 00:02:30.087 SO libspdk_nvme.so.15.0 00:02:30.346 CC 
lib/bdev/bdev.o 00:02:30.346 CC lib/bdev/bdev_rpc.o 00:02:30.346 CC lib/bdev/bdev_zone.o 00:02:30.346 CC lib/bdev/part.o 00:02:30.346 CC lib/bdev/scsi_nvme.o 00:02:30.346 LIB libspdk_fuse_dispatcher.a 00:02:30.346 SYMLINK libspdk_nvme.so 00:02:30.346 SO libspdk_fuse_dispatcher.so.1.0 00:02:30.604 SYMLINK libspdk_fuse_dispatcher.so 00:02:31.978 LIB libspdk_blob.a 00:02:31.978 SO libspdk_blob.so.12.0 00:02:31.978 SYMLINK libspdk_blob.so 00:02:32.235 CC lib/lvol/lvol.o 00:02:32.235 CC lib/blobfs/blobfs.o 00:02:32.235 CC lib/blobfs/tree.o 00:02:33.171 LIB libspdk_bdev.a 00:02:33.171 SO libspdk_bdev.so.17.0 00:02:33.171 LIB libspdk_blobfs.a 00:02:33.171 SYMLINK libspdk_bdev.so 00:02:33.171 SO libspdk_blobfs.so.11.0 00:02:33.171 SYMLINK libspdk_blobfs.so 00:02:33.171 LIB libspdk_lvol.a 00:02:33.171 SO libspdk_lvol.so.11.0 00:02:33.171 SYMLINK libspdk_lvol.so 00:02:33.171 CC lib/scsi/dev.o 00:02:33.171 CC lib/scsi/lun.o 00:02:33.171 CC lib/ftl/ftl_core.o 00:02:33.171 CC lib/nvmf/ctrlr.o 00:02:33.171 CC lib/scsi/port.o 00:02:33.171 CC lib/nbd/nbd.o 00:02:33.171 CC lib/nvmf/ctrlr_discovery.o 00:02:33.171 CC lib/ftl/ftl_init.o 00:02:33.171 CC lib/ublk/ublk.o 00:02:33.171 CC lib/scsi/scsi.o 00:02:33.171 CC lib/nbd/nbd_rpc.o 00:02:33.171 CC lib/ftl/ftl_layout.o 00:02:33.171 CC lib/nvmf/ctrlr_bdev.o 00:02:33.171 CC lib/ublk/ublk_rpc.o 00:02:33.171 CC lib/scsi/scsi_bdev.o 00:02:33.171 CC lib/ftl/ftl_debug.o 00:02:33.171 CC lib/nvmf/subsystem.o 00:02:33.171 CC lib/ftl/ftl_io.o 00:02:33.171 CC lib/nvmf/nvmf.o 00:02:33.171 CC lib/scsi/scsi_pr.o 00:02:33.171 CC lib/scsi/scsi_rpc.o 00:02:33.171 CC lib/ftl/ftl_sb.o 00:02:33.171 CC lib/scsi/task.o 00:02:33.171 CC lib/ftl/ftl_l2p.o 00:02:33.171 CC lib/nvmf/nvmf_rpc.o 00:02:33.171 CC lib/nvmf/transport.o 00:02:33.171 CC lib/ftl/ftl_nv_cache.o 00:02:33.171 CC lib/ftl/ftl_l2p_flat.o 00:02:33.171 CC lib/nvmf/stubs.o 00:02:33.172 CC lib/nvmf/tcp.o 00:02:33.172 CC lib/ftl/ftl_band.o 00:02:33.172 CC lib/nvmf/mdns_server.o 00:02:33.172 CC 
lib/ftl/ftl_band_ops.o 00:02:33.172 CC lib/nvmf/vfio_user.o 00:02:33.172 CC lib/ftl/ftl_writer.o 00:02:33.172 CC lib/nvmf/rdma.o 00:02:33.172 CC lib/ftl/ftl_rq.o 00:02:33.172 CC lib/ftl/ftl_reloc.o 00:02:33.172 CC lib/nvmf/auth.o 00:02:33.172 CC lib/ftl/ftl_l2p_cache.o 00:02:33.172 CC lib/ftl/ftl_p2l.o 00:02:33.172 CC lib/ftl/ftl_p2l_log.o 00:02:33.172 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.172 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.172 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.172 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.172 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.172 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.750 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.750 CC lib/ftl/utils/ftl_conf.o 00:02:33.750 CC lib/ftl/utils/ftl_md.o 00:02:33.750 CC lib/ftl/utils/ftl_mempool.o 00:02:33.750 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.750 CC lib/ftl/utils/ftl_property.o 00:02:33.750 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.750 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.750 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.750 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.750 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.750 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.012 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:34.012 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.012 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.012 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.012 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.012 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:34.012 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:34.012 CC lib/ftl/base/ftl_base_dev.o 00:02:34.012 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.012 CC lib/ftl/ftl_trace.o 00:02:34.012 LIB libspdk_nbd.a 00:02:34.272 SO libspdk_nbd.so.7.0 00:02:34.272 
SYMLINK libspdk_nbd.so 00:02:34.272 LIB libspdk_scsi.a 00:02:34.272 SO libspdk_scsi.so.9.0 00:02:34.272 SYMLINK libspdk_scsi.so 00:02:34.530 LIB libspdk_ublk.a 00:02:34.530 SO libspdk_ublk.so.3.0 00:02:34.530 SYMLINK libspdk_ublk.so 00:02:34.530 CC lib/vhost/vhost.o 00:02:34.530 CC lib/iscsi/conn.o 00:02:34.531 CC lib/vhost/vhost_rpc.o 00:02:34.531 CC lib/iscsi/init_grp.o 00:02:34.531 CC lib/vhost/vhost_scsi.o 00:02:34.531 CC lib/iscsi/iscsi.o 00:02:34.531 CC lib/vhost/vhost_blk.o 00:02:34.531 CC lib/iscsi/param.o 00:02:34.531 CC lib/vhost/rte_vhost_user.o 00:02:34.531 CC lib/iscsi/portal_grp.o 00:02:34.531 CC lib/iscsi/tgt_node.o 00:02:34.531 CC lib/iscsi/iscsi_subsystem.o 00:02:34.531 CC lib/iscsi/iscsi_rpc.o 00:02:34.531 CC lib/iscsi/task.o 00:02:34.789 LIB libspdk_ftl.a 00:02:35.048 SO libspdk_ftl.so.9.0 00:02:35.306 SYMLINK libspdk_ftl.so 00:02:35.873 LIB libspdk_vhost.a 00:02:35.873 SO libspdk_vhost.so.8.0 00:02:35.873 SYMLINK libspdk_vhost.so 00:02:35.873 LIB libspdk_nvmf.a 00:02:36.132 SO libspdk_nvmf.so.20.0 00:02:36.132 LIB libspdk_iscsi.a 00:02:36.132 SO libspdk_iscsi.so.8.0 00:02:36.132 SYMLINK libspdk_nvmf.so 00:02:36.132 SYMLINK libspdk_iscsi.so 00:02:36.390 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.390 CC module/vfu_device/vfu_virtio.o 00:02:36.390 CC module/vfu_device/vfu_virtio_blk.o 00:02:36.390 CC module/vfu_device/vfu_virtio_scsi.o 00:02:36.390 CC module/vfu_device/vfu_virtio_rpc.o 00:02:36.390 CC module/vfu_device/vfu_virtio_fs.o 00:02:36.648 CC module/keyring/linux/keyring.o 00:02:36.648 CC module/accel/error/accel_error.o 00:02:36.648 CC module/accel/iaa/accel_iaa.o 00:02:36.648 CC module/keyring/linux/keyring_rpc.o 00:02:36.648 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.648 CC module/accel/error/accel_error_rpc.o 00:02:36.648 CC module/blob/bdev/blob_bdev.o 00:02:36.648 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.648 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.648 CC module/keyring/file/keyring.o 00:02:36.648 CC 
module/fsdev/aio/fsdev_aio.o 00:02:36.648 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:36.648 CC module/keyring/file/keyring_rpc.o 00:02:36.648 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.648 CC module/sock/posix/posix.o 00:02:36.648 CC module/fsdev/aio/linux_aio_mgr.o 00:02:36.648 CC module/accel/ioat/accel_ioat.o 00:02:36.648 CC module/accel/dsa/accel_dsa.o 00:02:36.648 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.648 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.648 LIB libspdk_env_dpdk_rpc.a 00:02:36.648 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.648 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.906 LIB libspdk_scheduler_gscheduler.a 00:02:36.906 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.906 SO libspdk_scheduler_gscheduler.so.4.0 00:02:36.906 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.906 LIB libspdk_accel_ioat.a 00:02:36.906 LIB libspdk_scheduler_dynamic.a 00:02:36.906 LIB libspdk_keyring_file.a 00:02:36.906 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.906 SO libspdk_accel_ioat.so.6.0 00:02:36.906 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.906 LIB libspdk_keyring_linux.a 00:02:36.906 SO libspdk_keyring_file.so.2.0 00:02:36.906 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.906 SO libspdk_keyring_linux.so.1.0 00:02:36.906 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.906 SYMLINK libspdk_accel_ioat.so 00:02:36.906 SYMLINK libspdk_keyring_file.so 00:02:36.906 LIB libspdk_blob_bdev.a 00:02:36.906 LIB libspdk_accel_iaa.a 00:02:36.906 LIB libspdk_accel_error.a 00:02:36.906 LIB libspdk_accel_dsa.a 00:02:36.906 SYMLINK libspdk_keyring_linux.so 00:02:36.906 SO libspdk_blob_bdev.so.12.0 00:02:36.906 SO libspdk_accel_iaa.so.3.0 00:02:36.906 SO libspdk_accel_error.so.2.0 00:02:36.906 SO libspdk_accel_dsa.so.5.0 00:02:36.906 SYMLINK libspdk_blob_bdev.so 00:02:36.906 SYMLINK libspdk_accel_iaa.so 00:02:36.906 SYMLINK libspdk_accel_error.so 00:02:36.906 SYMLINK libspdk_accel_dsa.so 00:02:37.165 LIB libspdk_vfu_device.a 00:02:37.165 SO 
libspdk_vfu_device.so.3.0 00:02:37.424 CC module/blobfs/bdev/blobfs_bdev.o 00:02:37.424 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:37.424 CC module/bdev/raid/bdev_raid.o 00:02:37.424 CC module/bdev/nvme/bdev_nvme.o 00:02:37.424 CC module/bdev/lvol/vbdev_lvol.o 00:02:37.424 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.424 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.424 CC module/bdev/error/vbdev_error.o 00:02:37.424 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.424 CC module/bdev/split/vbdev_split.o 00:02:37.424 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.424 CC module/bdev/nvme/nvme_rpc.o 00:02:37.424 CC module/bdev/delay/vbdev_delay.o 00:02:37.424 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.424 CC module/bdev/gpt/gpt.o 00:02:37.424 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.424 CC module/bdev/error/vbdev_error_rpc.o 00:02:37.424 CC module/bdev/gpt/vbdev_gpt.o 00:02:37.424 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.424 CC module/bdev/malloc/bdev_malloc.o 00:02:37.424 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.424 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.424 CC module/bdev/null/bdev_null.o 00:02:37.424 CC module/bdev/raid/raid0.o 00:02:37.424 CC module/bdev/nvme/vbdev_opal.o 00:02:37.424 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.424 CC module/bdev/iscsi/bdev_iscsi.o 00:02:37.424 CC module/bdev/aio/bdev_aio.o 00:02:37.424 CC module/bdev/null/bdev_null_rpc.o 00:02:37.424 CC module/bdev/raid/raid1.o 00:02:37.424 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.424 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.424 CC module/bdev/aio/bdev_aio_rpc.o 00:02:37.424 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.424 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:37.424 CC module/bdev/raid/concat.o 00:02:37.424 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.424 CC module/bdev/ftl/bdev_ftl.o 00:02:37.424 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:37.424 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.424 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:02:37.424 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:37.424 SYMLINK libspdk_vfu_device.so 00:02:37.424 LIB libspdk_fsdev_aio.a 00:02:37.424 SO libspdk_fsdev_aio.so.1.0 00:02:37.424 LIB libspdk_sock_posix.a 00:02:37.681 SO libspdk_sock_posix.so.6.0 00:02:37.681 SYMLINK libspdk_fsdev_aio.so 00:02:37.681 LIB libspdk_blobfs_bdev.a 00:02:37.681 SO libspdk_blobfs_bdev.so.6.0 00:02:37.681 SYMLINK libspdk_sock_posix.so 00:02:37.682 LIB libspdk_bdev_split.a 00:02:37.682 SYMLINK libspdk_blobfs_bdev.so 00:02:37.682 SO libspdk_bdev_split.so.6.0 00:02:37.682 LIB libspdk_bdev_error.a 00:02:37.682 LIB libspdk_bdev_gpt.a 00:02:37.682 SO libspdk_bdev_error.so.6.0 00:02:37.939 SO libspdk_bdev_gpt.so.6.0 00:02:37.939 LIB libspdk_bdev_null.a 00:02:37.939 SYMLINK libspdk_bdev_split.so 00:02:37.939 SO libspdk_bdev_null.so.6.0 00:02:37.939 SYMLINK libspdk_bdev_error.so 00:02:37.939 LIB libspdk_bdev_ftl.a 00:02:37.939 SYMLINK libspdk_bdev_gpt.so 00:02:37.939 LIB libspdk_bdev_passthru.a 00:02:37.939 LIB libspdk_bdev_malloc.a 00:02:37.939 LIB libspdk_bdev_iscsi.a 00:02:37.939 LIB libspdk_bdev_aio.a 00:02:37.939 SO libspdk_bdev_ftl.so.6.0 00:02:37.939 SO libspdk_bdev_passthru.so.6.0 00:02:37.939 SO libspdk_bdev_malloc.so.6.0 00:02:37.939 SO libspdk_bdev_aio.so.6.0 00:02:37.939 SO libspdk_bdev_iscsi.so.6.0 00:02:37.939 LIB libspdk_bdev_delay.a 00:02:37.939 SYMLINK libspdk_bdev_null.so 00:02:37.939 LIB libspdk_bdev_zone_block.a 00:02:37.939 SO libspdk_bdev_delay.so.6.0 00:02:37.939 SO libspdk_bdev_zone_block.so.6.0 00:02:37.939 SYMLINK libspdk_bdev_ftl.so 00:02:37.939 SYMLINK libspdk_bdev_passthru.so 00:02:37.939 SYMLINK libspdk_bdev_aio.so 00:02:37.939 SYMLINK libspdk_bdev_iscsi.so 00:02:37.939 SYMLINK libspdk_bdev_malloc.so 00:02:37.939 LIB libspdk_bdev_lvol.a 00:02:37.939 SYMLINK libspdk_bdev_delay.so 00:02:37.939 SO libspdk_bdev_lvol.so.6.0 00:02:37.939 SYMLINK libspdk_bdev_zone_block.so 00:02:37.939 SYMLINK libspdk_bdev_lvol.so 00:02:38.197 LIB 
libspdk_bdev_virtio.a 00:02:38.197 SO libspdk_bdev_virtio.so.6.0 00:02:38.197 SYMLINK libspdk_bdev_virtio.so 00:02:38.456 LIB libspdk_bdev_raid.a 00:02:38.716 SO libspdk_bdev_raid.so.6.0 00:02:38.716 SYMLINK libspdk_bdev_raid.so 00:02:40.095 LIB libspdk_bdev_nvme.a 00:02:40.095 SO libspdk_bdev_nvme.so.7.1 00:02:40.095 SYMLINK libspdk_bdev_nvme.so 00:02:40.661 CC module/event/subsystems/keyring/keyring.o 00:02:40.661 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.661 CC module/event/subsystems/scheduler/scheduler.o 00:02:40.661 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.661 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.661 CC module/event/subsystems/fsdev/fsdev.o 00:02:40.661 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:40.661 CC module/event/subsystems/sock/sock.o 00:02:40.661 CC module/event/subsystems/vmd/vmd.o 00:02:40.661 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.661 LIB libspdk_event_keyring.a 00:02:40.661 LIB libspdk_event_scheduler.a 00:02:40.661 LIB libspdk_event_vhost_blk.a 00:02:40.661 LIB libspdk_event_fsdev.a 00:02:40.661 LIB libspdk_event_vfu_tgt.a 00:02:40.661 LIB libspdk_event_vmd.a 00:02:40.661 LIB libspdk_event_sock.a 00:02:40.661 SO libspdk_event_keyring.so.1.0 00:02:40.661 LIB libspdk_event_iobuf.a 00:02:40.661 SO libspdk_event_scheduler.so.4.0 00:02:40.661 SO libspdk_event_vhost_blk.so.3.0 00:02:40.661 SO libspdk_event_fsdev.so.1.0 00:02:40.661 SO libspdk_event_vfu_tgt.so.3.0 00:02:40.661 SO libspdk_event_vmd.so.6.0 00:02:40.661 SO libspdk_event_sock.so.5.0 00:02:40.661 SO libspdk_event_iobuf.so.3.0 00:02:40.661 SYMLINK libspdk_event_keyring.so 00:02:40.661 SYMLINK libspdk_event_fsdev.so 00:02:40.661 SYMLINK libspdk_event_vhost_blk.so 00:02:40.661 SYMLINK libspdk_event_scheduler.so 00:02:40.661 SYMLINK libspdk_event_vfu_tgt.so 00:02:40.661 SYMLINK libspdk_event_sock.so 00:02:40.661 SYMLINK libspdk_event_vmd.so 00:02:40.919 SYMLINK libspdk_event_iobuf.so 00:02:40.919 CC module/event/subsystems/accel/accel.o 
00:02:41.178 LIB libspdk_event_accel.a 00:02:41.178 SO libspdk_event_accel.so.6.0 00:02:41.178 SYMLINK libspdk_event_accel.so 00:02:41.438 CC module/event/subsystems/bdev/bdev.o 00:02:41.697 LIB libspdk_event_bdev.a 00:02:41.697 SO libspdk_event_bdev.so.6.0 00:02:41.697 SYMLINK libspdk_event_bdev.so 00:02:41.955 CC module/event/subsystems/nbd/nbd.o 00:02:41.955 CC module/event/subsystems/ublk/ublk.o 00:02:41.955 CC module/event/subsystems/scsi/scsi.o 00:02:41.955 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.955 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.955 LIB libspdk_event_nbd.a 00:02:41.955 LIB libspdk_event_ublk.a 00:02:41.955 SO libspdk_event_nbd.so.6.0 00:02:41.955 SO libspdk_event_ublk.so.3.0 00:02:41.955 LIB libspdk_event_scsi.a 00:02:41.955 SO libspdk_event_scsi.so.6.0 00:02:41.955 SYMLINK libspdk_event_nbd.so 00:02:41.955 SYMLINK libspdk_event_ublk.so 00:02:42.215 SYMLINK libspdk_event_scsi.so 00:02:42.215 LIB libspdk_event_nvmf.a 00:02:42.215 SO libspdk_event_nvmf.so.6.0 00:02:42.215 SYMLINK libspdk_event_nvmf.so 00:02:42.215 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.215 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.475 LIB libspdk_event_vhost_scsi.a 00:02:42.475 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.475 LIB libspdk_event_iscsi.a 00:02:42.475 SO libspdk_event_iscsi.so.6.0 00:02:42.475 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.475 SYMLINK libspdk_event_iscsi.so 00:02:42.733 SO libspdk.so.6.0 00:02:42.733 SYMLINK libspdk.so 00:02:42.733 CXX app/trace/trace.o 00:02:42.733 CC app/trace_record/trace_record.o 00:02:42.733 CC test/rpc_client/rpc_client_test.o 00:02:42.733 CC app/spdk_top/spdk_top.o 00:02:42.733 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.733 CC app/spdk_nvme_identify/identify.o 00:02:42.733 TEST_HEADER include/spdk/accel.h 00:02:42.733 TEST_HEADER include/spdk/accel_module.h 00:02:42.733 TEST_HEADER include/spdk/assert.h 00:02:42.733 TEST_HEADER include/spdk/barrier.h 00:02:42.733 
TEST_HEADER include/spdk/base64.h 00:02:42.733 CC app/spdk_lspci/spdk_lspci.o 00:02:42.733 TEST_HEADER include/spdk/bdev.h 00:02:42.733 TEST_HEADER include/spdk/bdev_module.h 00:02:42.733 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.733 CC app/spdk_nvme_perf/perf.o 00:02:42.997 TEST_HEADER include/spdk/bit_array.h 00:02:42.997 TEST_HEADER include/spdk/bit_pool.h 00:02:42.997 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.997 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.997 TEST_HEADER include/spdk/blobfs.h 00:02:42.997 TEST_HEADER include/spdk/blob.h 00:02:42.997 TEST_HEADER include/spdk/conf.h 00:02:42.997 TEST_HEADER include/spdk/config.h 00:02:42.997 TEST_HEADER include/spdk/crc16.h 00:02:42.997 TEST_HEADER include/spdk/cpuset.h 00:02:42.997 TEST_HEADER include/spdk/crc32.h 00:02:42.997 TEST_HEADER include/spdk/crc64.h 00:02:42.997 TEST_HEADER include/spdk/dif.h 00:02:42.997 TEST_HEADER include/spdk/dma.h 00:02:42.997 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.997 TEST_HEADER include/spdk/endian.h 00:02:42.997 TEST_HEADER include/spdk/env.h 00:02:42.997 TEST_HEADER include/spdk/event.h 00:02:42.997 TEST_HEADER include/spdk/fd_group.h 00:02:42.997 TEST_HEADER include/spdk/fd.h 00:02:42.997 TEST_HEADER include/spdk/file.h 00:02:42.997 TEST_HEADER include/spdk/fsdev.h 00:02:42.997 TEST_HEADER include/spdk/fsdev_module.h 00:02:42.997 TEST_HEADER include/spdk/ftl.h 00:02:42.997 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:42.997 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.997 TEST_HEADER include/spdk/hexlify.h 00:02:42.997 TEST_HEADER include/spdk/histogram_data.h 00:02:42.997 TEST_HEADER include/spdk/idxd.h 00:02:42.997 TEST_HEADER include/spdk/init.h 00:02:42.997 TEST_HEADER include/spdk/idxd_spec.h 00:02:42.997 TEST_HEADER include/spdk/ioat.h 00:02:42.997 TEST_HEADER include/spdk/ioat_spec.h 00:02:42.997 TEST_HEADER include/spdk/iscsi_spec.h 00:02:42.997 TEST_HEADER include/spdk/json.h 00:02:42.997 TEST_HEADER include/spdk/jsonrpc.h 00:02:42.997 
TEST_HEADER include/spdk/keyring.h 00:02:42.997 TEST_HEADER include/spdk/keyring_module.h 00:02:42.997 TEST_HEADER include/spdk/likely.h 00:02:42.997 TEST_HEADER include/spdk/log.h 00:02:42.997 TEST_HEADER include/spdk/lvol.h 00:02:42.997 TEST_HEADER include/spdk/md5.h 00:02:42.997 TEST_HEADER include/spdk/memory.h 00:02:42.997 TEST_HEADER include/spdk/mmio.h 00:02:42.997 TEST_HEADER include/spdk/nbd.h 00:02:42.997 TEST_HEADER include/spdk/net.h 00:02:42.997 TEST_HEADER include/spdk/notify.h 00:02:42.997 TEST_HEADER include/spdk/nvme.h 00:02:42.997 TEST_HEADER include/spdk/nvme_intel.h 00:02:42.997 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:42.997 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:42.998 TEST_HEADER include/spdk/nvme_spec.h 00:02:42.998 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.998 TEST_HEADER include/spdk/nvme_zns.h 00:02:42.998 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:42.998 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:42.998 TEST_HEADER include/spdk/nvmf.h 00:02:42.998 TEST_HEADER include/spdk/nvmf_transport.h 00:02:42.998 TEST_HEADER include/spdk/nvmf_spec.h 00:02:42.998 TEST_HEADER include/spdk/opal_spec.h 00:02:42.998 TEST_HEADER include/spdk/opal.h 00:02:42.998 TEST_HEADER include/spdk/pci_ids.h 00:02:42.998 TEST_HEADER include/spdk/pipe.h 00:02:42.998 TEST_HEADER include/spdk/queue.h 00:02:42.998 TEST_HEADER include/spdk/reduce.h 00:02:42.998 TEST_HEADER include/spdk/rpc.h 00:02:42.998 TEST_HEADER include/spdk/scheduler.h 00:02:42.998 TEST_HEADER include/spdk/scsi_spec.h 00:02:42.998 TEST_HEADER include/spdk/scsi.h 00:02:42.998 TEST_HEADER include/spdk/stdinc.h 00:02:42.998 TEST_HEADER include/spdk/sock.h 00:02:42.998 TEST_HEADER include/spdk/thread.h 00:02:42.998 TEST_HEADER include/spdk/string.h 00:02:42.998 TEST_HEADER include/spdk/trace.h 00:02:42.998 TEST_HEADER include/spdk/trace_parser.h 00:02:42.998 TEST_HEADER include/spdk/tree.h 00:02:42.998 TEST_HEADER include/spdk/ublk.h 00:02:42.998 TEST_HEADER include/spdk/util.h 
00:02:42.998 TEST_HEADER include/spdk/uuid.h 00:02:42.998 TEST_HEADER include/spdk/version.h 00:02:42.998 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:42.998 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:42.998 TEST_HEADER include/spdk/vhost.h 00:02:42.998 TEST_HEADER include/spdk/vmd.h 00:02:42.998 TEST_HEADER include/spdk/xor.h 00:02:42.998 TEST_HEADER include/spdk/zipf.h 00:02:42.998 CXX test/cpp_headers/accel.o 00:02:42.998 CXX test/cpp_headers/accel_module.o 00:02:42.998 CXX test/cpp_headers/barrier.o 00:02:42.998 CXX test/cpp_headers/assert.o 00:02:42.998 CXX test/cpp_headers/base64.o 00:02:42.998 CXX test/cpp_headers/bdev.o 00:02:42.998 CXX test/cpp_headers/bdev_module.o 00:02:42.998 CXX test/cpp_headers/bdev_zone.o 00:02:42.998 CXX test/cpp_headers/bit_array.o 00:02:42.998 CXX test/cpp_headers/bit_pool.o 00:02:42.998 CXX test/cpp_headers/blob_bdev.o 00:02:42.998 CXX test/cpp_headers/blobfs_bdev.o 00:02:42.998 CXX test/cpp_headers/blobfs.o 00:02:42.998 CXX test/cpp_headers/blob.o 00:02:42.998 CXX test/cpp_headers/conf.o 00:02:42.998 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.998 CXX test/cpp_headers/config.o 00:02:42.998 CXX test/cpp_headers/cpuset.o 00:02:42.998 CXX test/cpp_headers/crc16.o 00:02:42.998 CC app/spdk_dd/spdk_dd.o 00:02:42.998 CC app/nvmf_tgt/nvmf_main.o 00:02:42.998 CXX test/cpp_headers/crc32.o 00:02:42.998 CC app/spdk_tgt/spdk_tgt.o 00:02:42.998 CC examples/ioat/verify/verify.o 00:02:42.998 CC test/thread/poller_perf/poller_perf.o 00:02:42.998 CC examples/ioat/perf/perf.o 00:02:42.998 CC examples/util/zipf/zipf.o 00:02:42.998 CC test/app/jsoncat/jsoncat.o 00:02:42.998 CC test/app/histogram_perf/histogram_perf.o 00:02:42.998 CC test/env/vtophys/vtophys.o 00:02:42.998 CC test/env/memory/memory_ut.o 00:02:42.998 CC test/app/stub/stub.o 00:02:42.998 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:42.998 CC app/fio/nvme/fio_plugin.o 00:02:42.998 CC test/env/pci/pci_ut.o 00:02:42.998 CC test/dma/test_dma/test_dma.o 00:02:42.998 CC 
test/app/bdev_svc/bdev_svc.o 00:02:42.998 CC app/fio/bdev/fio_plugin.o 00:02:43.260 LINK spdk_lspci 00:02:43.260 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.260 CC test/env/mem_callbacks/mem_callbacks.o 00:02:43.260 LINK rpc_client_test 00:02:43.260 LINK interrupt_tgt 00:02:43.260 LINK spdk_nvme_discover 00:02:43.260 LINK jsoncat 00:02:43.260 CXX test/cpp_headers/crc64.o 00:02:43.260 LINK zipf 00:02:43.260 LINK poller_perf 00:02:43.260 CXX test/cpp_headers/dif.o 00:02:43.260 LINK vtophys 00:02:43.532 LINK histogram_perf 00:02:43.532 CXX test/cpp_headers/dma.o 00:02:43.532 LINK env_dpdk_post_init 00:02:43.532 CXX test/cpp_headers/endian.o 00:02:43.532 CXX test/cpp_headers/env_dpdk.o 00:02:43.532 LINK spdk_trace_record 00:02:43.532 CXX test/cpp_headers/env.o 00:02:43.532 LINK stub 00:02:43.532 CXX test/cpp_headers/event.o 00:02:43.532 CXX test/cpp_headers/fd_group.o 00:02:43.532 LINK nvmf_tgt 00:02:43.532 CXX test/cpp_headers/fd.o 00:02:43.532 CXX test/cpp_headers/file.o 00:02:43.532 CXX test/cpp_headers/fsdev.o 00:02:43.532 LINK iscsi_tgt 00:02:43.532 CXX test/cpp_headers/fsdev_module.o 00:02:43.532 CXX test/cpp_headers/ftl.o 00:02:43.532 CXX test/cpp_headers/fuse_dispatcher.o 00:02:43.532 CXX test/cpp_headers/gpt_spec.o 00:02:43.532 CXX test/cpp_headers/hexlify.o 00:02:43.532 CXX test/cpp_headers/histogram_data.o 00:02:43.532 LINK verify 00:02:43.532 LINK spdk_tgt 00:02:43.532 LINK ioat_perf 00:02:43.532 LINK bdev_svc 00:02:43.532 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.532 CXX test/cpp_headers/idxd.o 00:02:43.532 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:43.532 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:43.791 CXX test/cpp_headers/idxd_spec.o 00:02:43.791 CXX test/cpp_headers/init.o 00:02:43.791 CXX test/cpp_headers/ioat.o 00:02:43.791 LINK spdk_dd 00:02:43.791 CXX test/cpp_headers/ioat_spec.o 00:02:43.791 CXX test/cpp_headers/iscsi_spec.o 00:02:43.791 CXX test/cpp_headers/json.o 00:02:43.791 CXX test/cpp_headers/jsonrpc.o 00:02:43.791 
LINK spdk_trace 00:02:43.791 CXX test/cpp_headers/keyring.o 00:02:43.791 CXX test/cpp_headers/keyring_module.o 00:02:43.791 CXX test/cpp_headers/log.o 00:02:43.791 CXX test/cpp_headers/likely.o 00:02:43.791 CXX test/cpp_headers/lvol.o 00:02:43.791 CXX test/cpp_headers/md5.o 00:02:43.791 CXX test/cpp_headers/memory.o 00:02:43.791 CXX test/cpp_headers/mmio.o 00:02:43.791 CXX test/cpp_headers/nbd.o 00:02:43.791 LINK pci_ut 00:02:43.791 CXX test/cpp_headers/net.o 00:02:43.791 CXX test/cpp_headers/notify.o 00:02:43.791 CXX test/cpp_headers/nvme.o 00:02:43.791 CXX test/cpp_headers/nvme_intel.o 00:02:43.791 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.791 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.791 CXX test/cpp_headers/nvme_spec.o 00:02:44.057 CXX test/cpp_headers/nvme_zns.o 00:02:44.057 CXX test/cpp_headers/nvmf_cmd.o 00:02:44.057 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:44.057 CC test/event/event_perf/event_perf.o 00:02:44.057 CXX test/cpp_headers/nvmf.o 00:02:44.057 CC examples/sock/hello_world/hello_sock.o 00:02:44.057 CC test/event/reactor/reactor.o 00:02:44.057 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.057 CXX test/cpp_headers/nvmf_spec.o 00:02:44.057 LINK nvme_fuzz 00:02:44.057 CC examples/thread/thread/thread_ex.o 00:02:44.057 CC test/event/reactor_perf/reactor_perf.o 00:02:44.057 CC examples/idxd/perf/perf.o 00:02:44.057 CXX test/cpp_headers/nvmf_transport.o 00:02:44.057 CC examples/vmd/led/led.o 00:02:44.057 CC test/event/app_repeat/app_repeat.o 00:02:44.057 LINK spdk_bdev 00:02:44.057 LINK test_dma 00:02:44.057 LINK spdk_nvme 00:02:44.057 CXX test/cpp_headers/opal.o 00:02:44.057 CXX test/cpp_headers/opal_spec.o 00:02:44.321 CXX test/cpp_headers/pci_ids.o 00:02:44.321 CC test/event/scheduler/scheduler.o 00:02:44.321 CXX test/cpp_headers/pipe.o 00:02:44.321 CXX test/cpp_headers/queue.o 00:02:44.321 CXX test/cpp_headers/reduce.o 00:02:44.321 CXX test/cpp_headers/rpc.o 00:02:44.321 CXX test/cpp_headers/scheduler.o 00:02:44.321 CXX test/cpp_headers/scsi.o 
00:02:44.321 CXX test/cpp_headers/scsi_spec.o 00:02:44.321 CXX test/cpp_headers/sock.o 00:02:44.321 CXX test/cpp_headers/stdinc.o 00:02:44.321 CXX test/cpp_headers/string.o 00:02:44.321 CXX test/cpp_headers/thread.o 00:02:44.321 CXX test/cpp_headers/trace.o 00:02:44.321 CXX test/cpp_headers/trace_parser.o 00:02:44.321 CXX test/cpp_headers/tree.o 00:02:44.321 CXX test/cpp_headers/ublk.o 00:02:44.321 CXX test/cpp_headers/util.o 00:02:44.321 CXX test/cpp_headers/uuid.o 00:02:44.321 LINK reactor 00:02:44.321 LINK event_perf 00:02:44.321 LINK lsvmd 00:02:44.321 CXX test/cpp_headers/version.o 00:02:44.321 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.321 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.321 CC app/vhost/vhost.o 00:02:44.321 CXX test/cpp_headers/vhost.o 00:02:44.321 LINK reactor_perf 00:02:44.321 CXX test/cpp_headers/vmd.o 00:02:44.321 CXX test/cpp_headers/xor.o 00:02:44.321 LINK led 00:02:44.321 CXX test/cpp_headers/zipf.o 00:02:44.583 LINK mem_callbacks 00:02:44.583 LINK app_repeat 00:02:44.583 LINK vhost_fuzz 00:02:44.583 LINK spdk_nvme_perf 00:02:44.583 LINK spdk_nvme_identify 00:02:44.583 LINK hello_sock 00:02:44.583 LINK spdk_top 00:02:44.583 LINK thread 00:02:44.843 LINK scheduler 00:02:44.843 LINK idxd_perf 00:02:44.843 LINK vhost 00:02:44.843 CC test/nvme/aer/aer.o 00:02:44.843 CC test/nvme/reset/reset.o 00:02:44.843 CC test/nvme/startup/startup.o 00:02:44.843 CC test/nvme/fdp/fdp.o 00:02:44.843 CC test/nvme/connect_stress/connect_stress.o 00:02:44.843 CC test/nvme/e2edp/nvme_dp.o 00:02:44.843 CC test/nvme/sgl/sgl.o 00:02:44.843 CC test/nvme/overhead/overhead.o 00:02:44.843 CC test/nvme/reserve/reserve.o 00:02:44.843 CC test/nvme/cuse/cuse.o 00:02:44.843 CC test/nvme/boot_partition/boot_partition.o 00:02:44.843 CC test/nvme/simple_copy/simple_copy.o 00:02:44.843 CC test/nvme/err_injection/err_injection.o 00:02:44.843 CC test/nvme/compliance/nvme_compliance.o 00:02:44.843 CC test/nvme/fused_ordering/fused_ordering.o 00:02:44.843 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:44.843 CC test/blobfs/mkfs/mkfs.o 00:02:44.843 CC test/accel/dif/dif.o 00:02:44.843 CC test/lvol/esnap/esnap.o 00:02:45.102 CC examples/nvme/hello_world/hello_world.o 00:02:45.102 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.102 CC examples/nvme/reconnect/reconnect.o 00:02:45.102 CC examples/nvme/arbitration/arbitration.o 00:02:45.102 CC examples/nvme/hotplug/hotplug.o 00:02:45.102 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:45.102 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:45.102 CC examples/nvme/abort/abort.o 00:02:45.102 LINK boot_partition 00:02:45.102 LINK doorbell_aers 00:02:45.102 LINK connect_stress 00:02:45.102 LINK err_injection 00:02:45.102 LINK startup 00:02:45.102 LINK mkfs 00:02:45.102 LINK simple_copy 00:02:45.102 LINK memory_ut 00:02:45.102 LINK sgl 00:02:45.102 LINK nvme_dp 00:02:45.102 CC examples/accel/perf/accel_perf.o 00:02:45.102 LINK aer 00:02:45.361 LINK overhead 00:02:45.361 LINK reserve 00:02:45.361 CC examples/blob/hello_world/hello_blob.o 00:02:45.361 LINK fused_ordering 00:02:45.361 LINK fdp 00:02:45.361 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:45.361 CC examples/blob/cli/blobcli.o 00:02:45.361 LINK reset 00:02:45.361 LINK pmr_persistence 00:02:45.361 LINK hello_world 00:02:45.361 LINK nvme_compliance 00:02:45.361 LINK cmb_copy 00:02:45.361 LINK hotplug 00:02:45.618 LINK arbitration 00:02:45.618 LINK reconnect 00:02:45.618 LINK abort 00:02:45.618 LINK hello_fsdev 00:02:45.618 LINK hello_blob 00:02:45.618 LINK nvme_manage 00:02:45.875 LINK accel_perf 00:02:45.875 LINK dif 00:02:45.875 LINK blobcli 00:02:45.875 LINK iscsi_fuzz 00:02:46.133 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.133 CC examples/bdev/bdevperf/bdevperf.o 00:02:46.133 CC test/bdev/bdevio/bdevio.o 00:02:46.391 LINK hello_bdev 00:02:46.391 LINK cuse 00:02:46.649 LINK bdevio 00:02:46.907 LINK bdevperf 00:02:47.544 CC examples/nvmf/nvmf/nvmf.o 00:02:47.804 LINK nvmf 00:02:50.338 LINK esnap 
00:02:50.338 00:02:50.338 real 1m10.200s 00:02:50.338 user 11m52.196s 00:02:50.338 sys 2m41.003s 00:02:50.338 17:52:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:50.338 17:52:13 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.338 ************************************ 00:02:50.338 END TEST make 00:02:50.338 ************************************ 00:02:50.338 17:52:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.338 17:52:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.338 17:52:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.338 17:52:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.338 17:52:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.338 17:52:13 -- pm/common@44 -- $ pid=1267738 00:02:50.338 17:52:13 -- pm/common@50 -- $ kill -TERM 1267738 00:02:50.338 17:52:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.338 17:52:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.338 17:52:13 -- pm/common@44 -- $ pid=1267740 00:02:50.338 17:52:13 -- pm/common@50 -- $ kill -TERM 1267740 00:02:50.338 17:52:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.338 17:52:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:50.338 17:52:13 -- pm/common@44 -- $ pid=1267742 00:02:50.338 17:52:13 -- pm/common@50 -- $ kill -TERM 1267742 00:02:50.338 17:52:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.338 17:52:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:50.338 17:52:13 -- pm/common@44 -- $ pid=1267774 00:02:50.339 17:52:13 -- pm/common@50 -- $ sudo -E kill -TERM 1267774 00:02:50.597 17:52:13 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:50.597 17:52:13 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:50.597 17:52:13 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:50.597 17:52:13 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:50.597 17:52:13 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:50.597 17:52:13 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:50.597 17:52:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:50.597 17:52:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:50.597 17:52:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:50.597 17:52:13 -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.597 17:52:13 -- scripts/common.sh@336 -- # read -ra ver1 00:02:50.597 17:52:13 -- scripts/common.sh@337 -- # IFS=.-: 00:02:50.597 17:52:13 -- scripts/common.sh@337 -- # read -ra ver2 00:02:50.597 17:52:13 -- scripts/common.sh@338 -- # local 'op=<' 00:02:50.597 17:52:13 -- scripts/common.sh@340 -- # ver1_l=2 00:02:50.597 17:52:13 -- scripts/common.sh@341 -- # ver2_l=1 00:02:50.597 17:52:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:50.597 17:52:13 -- scripts/common.sh@344 -- # case "$op" in 00:02:50.597 17:52:13 -- scripts/common.sh@345 -- # : 1 00:02:50.597 17:52:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:50.597 17:52:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:50.597 17:52:13 -- scripts/common.sh@365 -- # decimal 1 00:02:50.597 17:52:13 -- scripts/common.sh@353 -- # local d=1 00:02:50.597 17:52:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.597 17:52:13 -- scripts/common.sh@355 -- # echo 1 00:02:50.597 17:52:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:50.597 17:52:13 -- scripts/common.sh@366 -- # decimal 2 00:02:50.597 17:52:13 -- scripts/common.sh@353 -- # local d=2 00:02:50.597 17:52:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.597 17:52:13 -- scripts/common.sh@355 -- # echo 2 00:02:50.597 17:52:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:50.597 17:52:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:50.597 17:52:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:50.597 17:52:13 -- scripts/common.sh@368 -- # return 0 00:02:50.597 17:52:13 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.597 17:52:13 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.597 --rc genhtml_branch_coverage=1 00:02:50.597 --rc genhtml_function_coverage=1 00:02:50.597 --rc genhtml_legend=1 00:02:50.597 --rc geninfo_all_blocks=1 00:02:50.597 --rc geninfo_unexecuted_blocks=1 00:02:50.597 00:02:50.597 ' 00:02:50.597 17:52:13 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.597 --rc genhtml_branch_coverage=1 00:02:50.597 --rc genhtml_function_coverage=1 00:02:50.597 --rc genhtml_legend=1 00:02:50.597 --rc geninfo_all_blocks=1 00:02:50.597 --rc geninfo_unexecuted_blocks=1 00:02:50.597 00:02:50.597 ' 00:02:50.597 17:52:13 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.597 --rc genhtml_branch_coverage=1 00:02:50.597 --rc 
genhtml_function_coverage=1 00:02:50.597 --rc genhtml_legend=1 00:02:50.597 --rc geninfo_all_blocks=1 00:02:50.597 --rc geninfo_unexecuted_blocks=1 00:02:50.597 00:02:50.597 ' 00:02:50.597 17:52:13 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.597 --rc genhtml_branch_coverage=1 00:02:50.597 --rc genhtml_function_coverage=1 00:02:50.597 --rc genhtml_legend=1 00:02:50.597 --rc geninfo_all_blocks=1 00:02:50.597 --rc geninfo_unexecuted_blocks=1 00:02:50.597 00:02:50.597 ' 00:02:50.597 17:52:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.597 17:52:13 -- nvmf/common.sh@7 -- # uname -s 00:02:50.597 17:52:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.597 17:52:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.597 17:52:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.597 17:52:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.597 17:52:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.597 17:52:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.597 17:52:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.597 17:52:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.597 17:52:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.598 17:52:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.598 17:52:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:50.598 17:52:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:50.598 17:52:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.598 17:52:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.598 17:52:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:50.598 17:52:13 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.598 17:52:13 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:50.598 17:52:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:50.598 17:52:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.598 17:52:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.598 17:52:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.598 17:52:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.598 17:52:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.598 17:52:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.598 17:52:13 -- paths/export.sh@5 -- # export PATH 00:02:50.598 17:52:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.598 17:52:13 -- nvmf/common.sh@51 -- # : 0 00:02:50.598 17:52:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:50.598 17:52:13 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:50.598 17:52:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.598 17:52:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.598 17:52:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.598 17:52:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:50.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:50.598 17:52:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:50.598 17:52:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:50.598 17:52:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:50.598 17:52:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.598 17:52:13 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.598 17:52:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.598 17:52:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.598 17:52:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.598 17:52:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.598 17:52:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.598 17:52:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.598 17:52:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.598 17:52:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.598 17:52:13 -- spdk/autotest.sh@48 -- # udevadm_pid=1329028 00:02:50.598 17:52:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.598 17:52:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.598 17:52:13 -- pm/common@17 -- # local monitor 00:02:50.598 17:52:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.598 17:52:13 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:50.598 17:52:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.598 17:52:13 -- pm/common@21 -- # date +%s 00:02:50.598 17:52:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.598 17:52:13 -- pm/common@21 -- # date +%s 00:02:50.598 17:52:13 -- pm/common@25 -- # sleep 1 00:02:50.598 17:52:13 -- pm/common@21 -- # date +%s 00:02:50.598 17:52:13 -- pm/common@21 -- # date +%s 00:02:50.598 17:52:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763133 00:02:50.598 17:52:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763133 00:02:50.598 17:52:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763133 00:02:50.598 17:52:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763133 00:02:50.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763133_collect-cpu-load.pm.log 00:02:50.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763133_collect-vmstat.pm.log 00:02:50.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763133_collect-cpu-temp.pm.log 00:02:50.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763133_collect-bmc-pm.bmc.pm.log 00:02:51.977 
17:52:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.977 17:52:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.977 17:52:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:51.977 17:52:14 -- common/autotest_common.sh@10 -- # set +x 00:02:51.977 17:52:14 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.977 17:52:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:51.977 17:52:14 -- common/autotest_common.sh@10 -- # set +x 00:02:51.977 17:52:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:51.977 17:52:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.977 17:52:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.977 17:52:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:51.977 17:52:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.977 17:52:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:51.977 17:52:14 -- common/autotest_common.sh@1457 -- # uname 00:02:51.977 17:52:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:51.977 17:52:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:51.977 17:52:14 -- common/autotest_common.sh@1477 -- # uname 00:02:51.977 17:52:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:51.977 17:52:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:51.977 17:52:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:51.977 lcov: LCOV version 1.15 00:02:51.977 17:52:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:10.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:31.969 17:52:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:31.969 17:52:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:31.969 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:03:31.969 17:52:52 -- spdk/autotest.sh@78 -- # rm -f 00:03:31.969 17:52:52 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.969 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:31.969 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:31.969 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:31.969 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:31.969 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:31.969 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:31.969 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:31.969 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:31.969 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:31.969 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:31.969 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:31.969 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:31.969 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:31.969 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:31.969 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:31.969 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:31.969 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:31.969 17:52:53 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:31.969 17:52:53 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:31.969 17:52:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:31.969 17:52:53 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:31.969 17:52:53 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:31.969 17:52:53 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:31.969 17:52:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:31.969 17:52:53 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0
00:03:31.969 17:52:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:31.969 17:52:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:31.969 17:52:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:31.969 17:52:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:31.969 17:52:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:31.969 17:52:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:31.969 17:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:31.969 17:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:31.969 17:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:31.969 17:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:31.969 17:52:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:31.969 No valid GPT data, bailing
00:03:31.969 17:52:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:31.969 17:52:54 -- scripts/common.sh@394 -- # pt=
00:03:31.969 17:52:54 -- scripts/common.sh@395 -- # return 1
00:03:31.969 17:52:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:31.969 1+0 records in
00:03:31.969 1+0 records out
00:03:31.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00232176 s, 452 MB/s
00:03:31.969 17:52:54 -- spdk/autotest.sh@105 -- # sync
00:03:31.969 17:52:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:31.969 17:52:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:31.969 17:52:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:33.343 17:52:56 -- spdk/autotest.sh@111 -- # uname -s
00:03:33.343 17:52:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:33.343 17:52:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:33.343 17:52:56 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:34.278 Hugepages
00:03:34.278 node hugesize free / total
00:03:34.278 node0 1048576kB 0 / 0
00:03:34.278 node0 2048kB 0 / 0
00:03:34.278 node1 1048576kB 0 / 0
00:03:34.278 node1 2048kB 0 / 0
00:03:34.278 
00:03:34.278 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:34.278 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:34.536 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:34.536 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:34.536 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:34.536 17:52:57 -- spdk/autotest.sh@117 -- # uname -s
00:03:34.536 17:52:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:34.536 17:52:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:34.536 17:52:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:35.909 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:35.909 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:35.909 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:36.847 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:36.847 17:52:59 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:37.785 17:53:00 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:37.785 17:53:00 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:37.785 17:53:00 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:37.785 17:53:00 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:37.785 17:53:00 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:37.785 17:53:00 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:37.785 17:53:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:37.785 17:53:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:37.785 17:53:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:38.043 17:53:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:38.043 17:53:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:03:38.043 17:53:00 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:38.979 Waiting for block devices as requested
00:03:39.237 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:39.238 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:39.496 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:39.496 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:39.496 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:39.496 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:39.756 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:39.756 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:39.756 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:39.756 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:40.015 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:40.015 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:40.015 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:40.275 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:40.275 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:40.275 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:40.275 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:40.534 17:53:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:40.534 17:53:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme
00:03:40.534 17:53:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:40.534 17:53:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:40.534 17:53:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:40.534 17:53:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:40.534 17:53:03 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:03:40.534 17:53:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:40.534 17:53:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:40.534 17:53:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:40.534 17:53:03 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:40.534 17:53:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:40.534 17:53:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:40.534 17:53:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:40.534 17:53:03 -- common/autotest_common.sh@1543 -- # continue
00:03:40.534 17:53:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:40.534 17:53:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:40.534 17:53:03 -- common/autotest_common.sh@10 -- # set +x
00:03:40.534 17:53:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:40.534 17:53:03 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:40.534 17:53:03 -- common/autotest_common.sh@10 -- # set +x
00:03:40.534 17:53:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:41.911 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:41.911 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:41.911 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:42.849 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:43.108 17:53:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:43.108 17:53:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:43.108 17:53:05 -- common/autotest_common.sh@10 -- # set +x
00:03:43.108 17:53:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:43.108 17:53:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:43.108 17:53:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:43.108 17:53:05 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:43.108 17:53:05 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:43.108 17:53:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:43.108 17:53:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:43.108 17:53:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:43.108 17:53:05 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:43.108 17:53:05 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:43.108 17:53:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:43.108 17:53:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:43.108 17:53:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:43.108 17:53:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:43.108 17:53:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:03:43.108 17:53:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:43.108 17:53:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:03:43.108 17:53:06 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:43.108 17:53:06 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:43.108 17:53:06 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:43.108 17:53:06 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:43.108 17:53:06 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0
00:03:43.108 17:53:06 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]]
00:03:43.108 17:53:06 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1339529
00:03:43.108 17:53:06 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:43.108 17:53:06 -- common/autotest_common.sh@1585 -- # waitforlisten 1339529
00:03:43.108 17:53:06 -- common/autotest_common.sh@835 -- # '[' -z 1339529 ']'
00:03:43.108 17:53:06 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:43.108 17:53:06 -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:43.108 17:53:06 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:43.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:43.108 17:53:06 -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:43.108 17:53:06 -- common/autotest_common.sh@10 -- # set +x
00:03:43.108 [2024-12-09 17:53:06.085963] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:03:43.108 [2024-12-09 17:53:06.086047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339529 ]
00:03:43.367 [2024-12-09 17:53:06.152583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:43.367 [2024-12-09 17:53:06.213775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:43.625 17:53:06 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:43.625 17:53:06 -- common/autotest_common.sh@868 -- # return 0
00:03:43.625 17:53:06 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:43.625 17:53:06 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:43.625 17:53:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:46.917 nvme0n1
00:03:46.917 17:53:09 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:46.917 [2024-12-09 17:53:09.819020] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 1
00:03:46.917 [2024-12-09 17:53:09.819060] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 1
00:03:46.917 request:
00:03:46.917 {
00:03:46.917 "nvme_ctrlr_name": "nvme0",
00:03:46.917 "password": "test",
00:03:46.917 "method": "bdev_nvme_opal_revert",
00:03:46.917 "req_id": 1
00:03:46.917 }
00:03:46.917 Got JSON-RPC error response
00:03:46.917 response:
00:03:46.917 {
00:03:46.917 "code": -32603,
00:03:46.917 "message": "Internal error"
00:03:46.917 }
00:03:46.917 17:53:09 -- common/autotest_common.sh@1591 -- # true
00:03:46.917 17:53:09 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:46.917 17:53:09 -- common/autotest_common.sh@1595 -- # killprocess 1339529
00:03:46.917 17:53:09 -- common/autotest_common.sh@954 -- # '[' -z 1339529 ']'
00:03:46.917 17:53:09 -- common/autotest_common.sh@958 -- # kill -0 1339529
00:03:46.917 17:53:09 -- common/autotest_common.sh@959 -- # uname
00:03:46.917 17:53:09 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:46.917 17:53:09 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1339529
00:03:46.917 17:53:09 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:46.917 17:53:09 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:46.917 17:53:09 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1339529'
00:03:46.917 killing process with pid 1339529
00:03:46.917 17:53:09 -- common/autotest_common.sh@973 -- # kill 1339529
00:03:46.917 17:53:09 -- common/autotest_common.sh@978 -- # wait 1339529
00:03:48.915 17:53:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:48.915 17:53:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:48.915 17:53:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:48.915 17:53:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:48.915 17:53:11 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:48.915 17:53:11 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:48.915 17:53:11 -- common/autotest_common.sh@10 -- # set +x
00:03:48.915 17:53:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:48.915 17:53:11 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:48.915 17:53:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:48.915 17:53:11 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:48.915 17:53:11 -- common/autotest_common.sh@10 -- # set +x
00:03:48.915 ************************************
00:03:48.915 START TEST env
00:03:48.915 ************************************
00:03:48.915 17:53:11 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:48.915 * Looking for test storage...
00:03:48.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:48.915 17:53:11 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:48.915 17:53:11 env -- common/autotest_common.sh@1711 -- # lcov --version
00:03:48.915 17:53:11 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:48.915 17:53:11 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:48.915 17:53:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:48.915 17:53:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:48.915 17:53:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:48.915 17:53:11 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:48.915 17:53:11 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:48.915 17:53:11 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:48.915 17:53:11 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:48.915 17:53:11 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:48.915 17:53:11 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:48.915 17:53:11 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:48.915 17:53:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:48.915 17:53:11 env -- scripts/common.sh@344 -- # case "$op" in
00:03:48.915 17:53:11 env -- scripts/common.sh@345 -- # : 1
00:03:48.915 17:53:11 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:48.915 17:53:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:48.915 17:53:11 env -- scripts/common.sh@365 -- # decimal 1
00:03:48.915 17:53:11 env -- scripts/common.sh@353 -- # local d=1
00:03:48.916 17:53:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:48.916 17:53:11 env -- scripts/common.sh@355 -- # echo 1
00:03:48.916 17:53:11 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:48.916 17:53:11 env -- scripts/common.sh@366 -- # decimal 2
00:03:48.916 17:53:11 env -- scripts/common.sh@353 -- # local d=2
00:03:48.916 17:53:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:48.916 17:53:11 env -- scripts/common.sh@355 -- # echo 2
00:03:48.916 17:53:11 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:48.916 17:53:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:48.916 17:53:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:48.916 17:53:11 env -- scripts/common.sh@368 -- # return 0
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:48.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.916 --rc genhtml_branch_coverage=1
00:03:48.916 --rc genhtml_function_coverage=1
00:03:48.916 --rc genhtml_legend=1
00:03:48.916 --rc geninfo_all_blocks=1
00:03:48.916 --rc geninfo_unexecuted_blocks=1
00:03:48.916 
00:03:48.916 '
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:48.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.916 --rc genhtml_branch_coverage=1
00:03:48.916 --rc genhtml_function_coverage=1
00:03:48.916 --rc genhtml_legend=1
00:03:48.916 --rc geninfo_all_blocks=1
00:03:48.916 --rc geninfo_unexecuted_blocks=1
00:03:48.916 
00:03:48.916 '
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:03:48.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.916 --rc genhtml_branch_coverage=1
00:03:48.916 --rc genhtml_function_coverage=1
00:03:48.916 --rc genhtml_legend=1
00:03:48.916 --rc geninfo_all_blocks=1
00:03:48.916 --rc geninfo_unexecuted_blocks=1
00:03:48.916 
00:03:48.916 '
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:03:48.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.916 --rc genhtml_branch_coverage=1
00:03:48.916 --rc genhtml_function_coverage=1
00:03:48.916 --rc genhtml_legend=1
00:03:48.916 --rc geninfo_all_blocks=1
00:03:48.916 --rc geninfo_unexecuted_blocks=1
00:03:48.916 
00:03:48.916 '
00:03:48.916 17:53:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:48.916 17:53:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:48.916 17:53:11 env -- common/autotest_common.sh@10 -- # set +x
00:03:48.916 ************************************
00:03:48.916 START TEST env_memory
00:03:48.916 ************************************
00:03:48.916 17:53:11 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:48.916 
00:03:48.916 
00:03:48.916 CUnit - A unit testing framework for C - Version 2.1-3
00:03:48.916 http://cunit.sourceforge.net/
00:03:48.916 
00:03:48.916 
00:03:48.916 Suite: memory
00:03:48.916 Test: alloc and free memory map ...[2024-12-09 17:53:11.905201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:48.916 passed
00:03:48.916 Test: mem map translation ...[2024-12-09 17:53:11.925459] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:48.916 [2024-12-09 17:53:11.925483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:48.916 [2024-12-09 17:53:11.925527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:48.916 [2024-12-09 17:53:11.925550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:49.175 passed
00:03:49.175 Test: mem map registration ...[2024-12-09 17:53:11.967200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:49.175 [2024-12-09 17:53:11.967221] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:49.175 passed
00:03:49.175 Test: mem map adjacent registrations ...passed
00:03:49.175 
00:03:49.175 Run Summary: Type Total Ran Passed Failed Inactive
00:03:49.175 suites 1 1 n/a 0 0
00:03:49.175 tests 4 4 4 0 0
00:03:49.175 asserts 152 152 152 0 n/a
00:03:49.175 
00:03:49.175 Elapsed time = 0.140 seconds
00:03:49.175 
00:03:49.175 real 0m0.147s
00:03:49.175 user 0m0.141s
00:03:49.175 sys 0m0.005s
00:03:49.175 17:53:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:49.175 17:53:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:49.175 ************************************
00:03:49.175 END TEST env_memory
00:03:49.175 ************************************
00:03:49.175 17:53:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:49.175 17:53:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:49.175 17:53:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:49.175 17:53:12 env -- common/autotest_common.sh@10 -- # set +x
00:03:49.175 ************************************
00:03:49.175 START TEST env_vtophys
00:03:49.175 ************************************
00:03:49.175 17:53:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:49.175 EAL: lib.eal log level changed from notice to debug
00:03:49.175 EAL: Detected lcore 0 as core 0 on socket 0
00:03:49.175 EAL: Detected lcore 1 as core 1 on socket 0
00:03:49.175 EAL: Detected lcore 2 as core 2 on socket 0
00:03:49.175 EAL: Detected lcore 3 as core 3 on socket 0
00:03:49.175 EAL: Detected lcore 4 as core 4 on socket 0
00:03:49.175 EAL: Detected lcore 5 as core 5 on socket 0
00:03:49.175 EAL: Detected lcore 6 as core 8 on socket 0
00:03:49.175 EAL: Detected lcore 7 as core 9 on socket 0
00:03:49.175 EAL: Detected lcore 8 as core 10 on socket 0
00:03:49.175 EAL: Detected lcore 9 as core 11 on socket 0
00:03:49.175 EAL: Detected lcore 10 as core 12 on socket 0
00:03:49.175 EAL: Detected lcore 11 as core 13 on socket 0
00:03:49.175 EAL: Detected lcore 12 as core 0 on socket 1
00:03:49.175 EAL: Detected lcore 13 as core 1 on socket 1
00:03:49.175 EAL: Detected lcore 14 as core 2 on socket 1
00:03:49.175 EAL: Detected lcore 15 as core 3 on socket 1
00:03:49.175 EAL: Detected lcore 16 as core 4 on socket 1
00:03:49.175 EAL: Detected lcore 17 as core 5 on socket 1
00:03:49.175 EAL: Detected lcore 18 as core 8 on socket 1
00:03:49.175 EAL: Detected lcore 19 as core 9 on socket 1
00:03:49.175 EAL: Detected lcore 20 as core 10 on socket 1
00:03:49.175 EAL: Detected lcore 21 as core 11 on socket 1
00:03:49.175 EAL: Detected lcore 22 as core 12 on socket 1
00:03:49.175 EAL: Detected lcore 23 as core 13 on socket 1
00:03:49.175 EAL: Detected lcore 24 as core 0 on socket 0
00:03:49.175 EAL: Detected lcore 25 as core 1 on socket 0
00:03:49.175 EAL: Detected lcore 26 as core 2 on socket 0
00:03:49.175 EAL: Detected lcore 27 as core 3 on socket 0
00:03:49.175 EAL: Detected lcore 28 as core 4 on socket 0
00:03:49.175 EAL: Detected lcore 29 as core 5 on socket 0
00:03:49.175 EAL: Detected lcore 30 as core 8 on socket 0
00:03:49.175 EAL: Detected lcore 31 as core 9 on socket 0
00:03:49.175 EAL: Detected lcore 32 as core 10 on socket 0
00:03:49.175 EAL: Detected lcore 33 as core 11 on socket 0
00:03:49.175 EAL: Detected lcore 34 as core 12 on socket 0
00:03:49.175 EAL: Detected lcore 35 as core 13 on socket 0
00:03:49.175 EAL: Detected lcore 36 as core 0 on socket 1
00:03:49.175 EAL: Detected lcore 37 as core 1 on socket 1
00:03:49.175 EAL: Detected lcore 38 as core 2 on socket 1
00:03:49.175 EAL: Detected lcore 39 as core 3 on socket 1
00:03:49.175 EAL: Detected lcore 40 as core 4 on socket 1
00:03:49.175 EAL: Detected lcore 41 as core 5 on socket 1
00:03:49.175 EAL: Detected lcore 42 as core 8 on socket 1
00:03:49.175 EAL: Detected lcore 43 as core 9 on socket 1
00:03:49.175 EAL: Detected lcore 44 as core 10 on socket 1
00:03:49.175 EAL: Detected lcore 45 as core 11 on socket 1
00:03:49.175 EAL: Detected lcore 46 as core 12 on socket 1
00:03:49.175 EAL: Detected lcore 47 as core 13 on socket 1
00:03:49.175 EAL: Maximum logical cores by configuration: 128
00:03:49.175 EAL: Detected CPU lcores: 48
00:03:49.175 EAL: Detected NUMA nodes: 2
00:03:49.175 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:49.175 EAL: Detected shared linkage of DPDK
00:03:49.175 EAL: No shared files mode enabled, IPC will be disabled
00:03:49.175 EAL: Bus pci wants IOVA as 'DC'
00:03:49.175 EAL: Buses did not request a specific IOVA mode.
00:03:49.175 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:49.175 EAL: Selected IOVA mode 'VA'
00:03:49.175 EAL: Probing VFIO support...
00:03:49.175 EAL: IOMMU type 1 (Type 1) is supported
00:03:49.175 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:49.175 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:49.175 EAL: VFIO support initialized
00:03:49.175 EAL: Ask a virtual area of 0x2e000 bytes
00:03:49.175 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:49.175 EAL: Setting up physically contiguous memory...
00:03:49.175 EAL: Setting maximum number of open files to 524288
00:03:49.175 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:49.175 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:49.175 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:49.175 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.175 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:49.175 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:49.175 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.175 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:49.175 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:49.175 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.175 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:49.175 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:49.175 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:49.176 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.176 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:49.176 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:49.176 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:49.176 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.176 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:49.176 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:49.176 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:49.176 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:49.176 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.176 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:49.176 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:49.176 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:49.176 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.176 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:49.176 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:49.176 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:49.176 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.176 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:49.176 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:49.176 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:49.176 EAL: Ask a virtual area of 0x61000 bytes
00:03:49.176 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:49.176 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:49.176 EAL: Ask a virtual area of 0x400000000 bytes
00:03:49.176 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:49.176 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:49.176 EAL: Hugepages will be freed exactly as allocated.
00:03:49.176 EAL: No shared files mode enabled, IPC is disabled
00:03:49.176 EAL: No shared files mode enabled, IPC is disabled
00:03:49.176 EAL: TSC frequency is ~2700000 KHz
00:03:49.176 EAL: Main lcore 0 is ready (tid=7f297ac02a00;cpuset=[0])
00:03:49.176 EAL: Trying to obtain current memory policy.
00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:49.176 EAL: Restoring previous memory policy: 0
00:03:49.176 EAL: request: mp_malloc_sync
00:03:49.176 EAL: No shared files mode enabled, IPC is disabled
00:03:49.176 EAL: Heap on socket 0 was expanded by 2MB
00:03:49.176 EAL: No shared files mode enabled, IPC is disabled
00:03:49.176 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:49.176 EAL: Mem event callback 'spdk:(nil)' registered
00:03:49.176 
00:03:49.176 
00:03:49.176 CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.176 http://cunit.sourceforge.net/
00:03:49.176 
00:03:49.176 
00:03:49.176 Suite: components_suite
00:03:49.176 Test: vtophys_malloc_test ...passed
00:03:49.176 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:49.176 EAL: Restoring previous memory policy: 4
00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.176 EAL: request: mp_malloc_sync
00:03:49.176 EAL: No shared files mode enabled, IPC is disabled
00:03:49.176 EAL: Heap on socket 0 was expanded by 4MB
00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.176 EAL: request: mp_malloc_sync
00:03:49.176 EAL: No shared files mode enabled, IPC is disabled
00:03:49.176 EAL: Heap on socket 0 was shrunk by 4MB
00:03:49.176 EAL: Trying to obtain current memory policy.
00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.176 EAL: Restoring previous memory policy: 4 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was expanded by 6MB 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was shrunk by 6MB 00:03:49.176 EAL: Trying to obtain current memory policy. 00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.176 EAL: Restoring previous memory policy: 4 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was expanded by 10MB 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was shrunk by 10MB 00:03:49.176 EAL: Trying to obtain current memory policy. 00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.176 EAL: Restoring previous memory policy: 4 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was expanded by 18MB 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was shrunk by 18MB 00:03:49.176 EAL: Trying to obtain current memory policy. 
00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.176 EAL: Restoring previous memory policy: 4 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was expanded by 34MB 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was shrunk by 34MB 00:03:49.176 EAL: Trying to obtain current memory policy. 00:03:49.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.176 EAL: Restoring previous memory policy: 4 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.176 EAL: request: mp_malloc_sync 00:03:49.176 EAL: No shared files mode enabled, IPC is disabled 00:03:49.176 EAL: Heap on socket 0 was expanded by 66MB 00:03:49.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.434 EAL: request: mp_malloc_sync 00:03:49.434 EAL: No shared files mode enabled, IPC is disabled 00:03:49.434 EAL: Heap on socket 0 was shrunk by 66MB 00:03:49.434 EAL: Trying to obtain current memory policy. 00:03:49.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.434 EAL: Restoring previous memory policy: 4 00:03:49.434 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.434 EAL: request: mp_malloc_sync 00:03:49.434 EAL: No shared files mode enabled, IPC is disabled 00:03:49.434 EAL: Heap on socket 0 was expanded by 130MB 00:03:49.434 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.434 EAL: request: mp_malloc_sync 00:03:49.434 EAL: No shared files mode enabled, IPC is disabled 00:03:49.434 EAL: Heap on socket 0 was shrunk by 130MB 00:03:49.434 EAL: Trying to obtain current memory policy. 
00:03:49.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.434 EAL: Restoring previous memory policy: 4 00:03:49.434 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.434 EAL: request: mp_malloc_sync 00:03:49.434 EAL: No shared files mode enabled, IPC is disabled 00:03:49.434 EAL: Heap on socket 0 was expanded by 258MB 00:03:49.434 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.434 EAL: request: mp_malloc_sync 00:03:49.434 EAL: No shared files mode enabled, IPC is disabled 00:03:49.434 EAL: Heap on socket 0 was shrunk by 258MB 00:03:49.434 EAL: Trying to obtain current memory policy. 00:03:49.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.692 EAL: Restoring previous memory policy: 4 00:03:49.692 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.692 EAL: request: mp_malloc_sync 00:03:49.692 EAL: No shared files mode enabled, IPC is disabled 00:03:49.692 EAL: Heap on socket 0 was expanded by 514MB 00:03:49.692 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.950 EAL: request: mp_malloc_sync 00:03:49.951 EAL: No shared files mode enabled, IPC is disabled 00:03:49.951 EAL: Heap on socket 0 was shrunk by 514MB 00:03:49.951 EAL: Trying to obtain current memory policy. 
00:03:49.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.209 EAL: Restoring previous memory policy: 4 00:03:50.209 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.209 EAL: request: mp_malloc_sync 00:03:50.209 EAL: No shared files mode enabled, IPC is disabled 00:03:50.209 EAL: Heap on socket 0 was expanded by 1026MB 00:03:50.466 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.724 EAL: request: mp_malloc_sync 00:03:50.724 EAL: No shared files mode enabled, IPC is disabled 00:03:50.724 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:50.724 passed 00:03:50.724 00:03:50.724 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.724 suites 1 1 n/a 0 0 00:03:50.724 tests 2 2 2 0 0 00:03:50.724 asserts 497 497 497 0 n/a 00:03:50.724 00:03:50.724 Elapsed time = 1.335 seconds 00:03:50.724 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.724 EAL: request: mp_malloc_sync 00:03:50.724 EAL: No shared files mode enabled, IPC is disabled 00:03:50.724 EAL: Heap on socket 0 was shrunk by 2MB 00:03:50.724 EAL: No shared files mode enabled, IPC is disabled 00:03:50.724 EAL: No shared files mode enabled, IPC is disabled 00:03:50.724 EAL: No shared files mode enabled, IPC is disabled 00:03:50.724 00:03:50.724 real 0m1.450s 00:03:50.724 user 0m0.856s 00:03:50.724 sys 0m0.564s 00:03:50.724 17:53:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.724 17:53:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:50.724 ************************************ 00:03:50.724 END TEST env_vtophys 00:03:50.724 ************************************ 00:03:50.724 17:53:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:50.724 17:53:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.724 17:53:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.724 17:53:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.724 
************************************ 00:03:50.724 START TEST env_pci 00:03:50.724 ************************************ 00:03:50.724 17:53:13 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:50.724 00:03:50.724 00:03:50.724 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.724 http://cunit.sourceforge.net/ 00:03:50.724 00:03:50.724 00:03:50.724 Suite: pci 00:03:50.724 Test: pci_hook ...[2024-12-09 17:53:13.581864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1340437 has claimed it 00:03:50.724 EAL: Cannot find device (10000:00:01.0) 00:03:50.724 EAL: Failed to attach device on primary process 00:03:50.725 passed 00:03:50.725 00:03:50.725 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.725 suites 1 1 n/a 0 0 00:03:50.725 tests 1 1 1 0 0 00:03:50.725 asserts 25 25 25 0 n/a 00:03:50.725 00:03:50.725 Elapsed time = 0.022 seconds 00:03:50.725 00:03:50.725 real 0m0.034s 00:03:50.725 user 0m0.016s 00:03:50.725 sys 0m0.018s 00:03:50.725 17:53:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.725 17:53:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:50.725 ************************************ 00:03:50.725 END TEST env_pci 00:03:50.725 ************************************ 00:03:50.725 17:53:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:50.725 17:53:13 env -- env/env.sh@15 -- # uname 00:03:50.725 17:53:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:50.725 17:53:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:50.725 17:53:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:50.725 17:53:13 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:50.725 17:53:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.725 17:53:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.725 ************************************ 00:03:50.725 START TEST env_dpdk_post_init 00:03:50.725 ************************************ 00:03:50.725 17:53:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:50.725 EAL: Detected CPU lcores: 48 00:03:50.725 EAL: Detected NUMA nodes: 2 00:03:50.725 EAL: Detected shared linkage of DPDK 00:03:50.725 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:50.725 EAL: Selected IOVA mode 'VA' 00:03:50.725 EAL: VFIO support initialized 00:03:50.725 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:50.725 EAL: Using IOMMU type 1 (Type 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:50.985 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:50.985 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:51.923 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:55.208 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:55.208 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:55.208 Starting DPDK initialization... 00:03:55.208 Starting SPDK post initialization... 00:03:55.208 SPDK NVMe probe 00:03:55.208 Attaching to 0000:88:00.0 00:03:55.208 Attached to 0000:88:00.0 00:03:55.208 Cleaning up... 00:03:55.208 00:03:55.208 real 0m4.420s 00:03:55.208 user 0m3.051s 00:03:55.208 sys 0m0.425s 00:03:55.208 17:53:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.208 17:53:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.208 ************************************ 00:03:55.208 END TEST env_dpdk_post_init 00:03:55.208 ************************************ 00:03:55.208 17:53:18 env -- env/env.sh@26 -- # uname 00:03:55.208 17:53:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.208 17:53:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.208 17:53:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.208 17:53:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.208 17:53:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.208 ************************************ 00:03:55.208 START TEST env_mem_callbacks 00:03:55.208 ************************************ 00:03:55.208 17:53:18 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.208 EAL: Detected CPU lcores: 48 00:03:55.208 EAL: Detected NUMA nodes: 2 00:03:55.208 EAL: Detected shared linkage of DPDK 00:03:55.208 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.208 EAL: Selected IOVA mode 'VA' 00:03:55.208 EAL: VFIO support initialized 00:03:55.208 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.208 00:03:55.208 00:03:55.208 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.208 http://cunit.sourceforge.net/ 00:03:55.208 00:03:55.208 00:03:55.208 Suite: memory 00:03:55.208 Test: test ... 00:03:55.208 register 0x200000200000 2097152 00:03:55.208 malloc 3145728 00:03:55.208 register 0x200000400000 4194304 00:03:55.208 buf 0x200000500000 len 3145728 PASSED 00:03:55.208 malloc 64 00:03:55.208 buf 0x2000004fff40 len 64 PASSED 00:03:55.208 malloc 4194304 00:03:55.208 register 0x200000800000 6291456 00:03:55.208 buf 0x200000a00000 len 4194304 PASSED 00:03:55.208 free 0x200000500000 3145728 00:03:55.208 free 0x2000004fff40 64 00:03:55.208 unregister 0x200000400000 4194304 PASSED 00:03:55.208 free 0x200000a00000 4194304 00:03:55.208 unregister 0x200000800000 6291456 PASSED 00:03:55.208 malloc 8388608 00:03:55.208 register 0x200000400000 10485760 00:03:55.208 buf 0x200000600000 len 8388608 PASSED 00:03:55.208 free 0x200000600000 8388608 00:03:55.208 unregister 0x200000400000 10485760 PASSED 00:03:55.208 passed 00:03:55.208 00:03:55.208 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.208 suites 1 1 n/a 0 0 00:03:55.208 tests 1 1 1 0 0 00:03:55.208 asserts 15 15 15 0 n/a 00:03:55.208 00:03:55.208 Elapsed time = 0.005 seconds 00:03:55.208 00:03:55.208 real 0m0.049s 00:03:55.208 user 0m0.011s 00:03:55.208 sys 0m0.038s 00:03:55.208 17:53:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.208 17:53:18 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.208 ************************************ 00:03:55.208 END TEST env_mem_callbacks 00:03:55.208 ************************************ 00:03:55.208 00:03:55.208 real 0m6.485s 00:03:55.208 user 0m4.266s 00:03:55.208 sys 0m1.268s 00:03:55.208 17:53:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.208 17:53:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.208 ************************************ 00:03:55.208 END TEST env 00:03:55.208 ************************************ 00:03:55.208 17:53:18 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.208 17:53:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.208 17:53:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.208 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:03:55.468 ************************************ 00:03:55.468 START TEST rpc 00:03:55.468 ************************************ 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.468 * Looking for test storage... 
00:03:55.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.468 17:53:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.468 17:53:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.468 17:53:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.468 17:53:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.468 17:53:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.468 17:53:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:55.468 17:53:18 rpc -- scripts/common.sh@345 -- # : 1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.468 17:53:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.468 17:53:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@353 -- # local d=1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.468 17:53:18 rpc -- scripts/common.sh@355 -- # echo 1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.468 17:53:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@353 -- # local d=2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.468 17:53:18 rpc -- scripts/common.sh@355 -- # echo 2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.468 17:53:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.468 17:53:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.468 17:53:18 rpc -- scripts/common.sh@368 -- # return 0 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.468 --rc genhtml_branch_coverage=1 00:03:55.468 --rc genhtml_function_coverage=1 00:03:55.468 --rc genhtml_legend=1 00:03:55.468 --rc geninfo_all_blocks=1 00:03:55.468 --rc geninfo_unexecuted_blocks=1 00:03:55.468 00:03:55.468 ' 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.468 --rc genhtml_branch_coverage=1 00:03:55.468 --rc genhtml_function_coverage=1 00:03:55.468 --rc genhtml_legend=1 00:03:55.468 --rc geninfo_all_blocks=1 00:03:55.468 --rc geninfo_unexecuted_blocks=1 00:03:55.468 00:03:55.468 ' 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:55.468 --rc genhtml_branch_coverage=1 00:03:55.468 --rc genhtml_function_coverage=1 00:03:55.468 --rc genhtml_legend=1 00:03:55.468 --rc geninfo_all_blocks=1 00:03:55.468 --rc geninfo_unexecuted_blocks=1 00:03:55.468 00:03:55.468 ' 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.468 --rc genhtml_branch_coverage=1 00:03:55.468 --rc genhtml_function_coverage=1 00:03:55.468 --rc genhtml_legend=1 00:03:55.468 --rc geninfo_all_blocks=1 00:03:55.468 --rc geninfo_unexecuted_blocks=1 00:03:55.468 00:03:55.468 ' 00:03:55.468 17:53:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1341209 00:03:55.468 17:53:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:55.468 17:53:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.468 17:53:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1341209 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@835 -- # '[' -z 1341209 ']' 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.468 17:53:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.468 [2024-12-09 17:53:18.451689] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:03:55.468 [2024-12-09 17:53:18.451775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341209 ] 00:03:55.726 [2024-12-09 17:53:18.519278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.726 [2024-12-09 17:53:18.574940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:55.726 [2024-12-09 17:53:18.575002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1341209' to capture a snapshot of events at runtime. 00:03:55.726 [2024-12-09 17:53:18.575030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:55.726 [2024-12-09 17:53:18.575041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:55.726 [2024-12-09 17:53:18.575051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1341209 for offline analysis/debug. 
00:03:55.726 [2024-12-09 17:53:18.575639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.984 17:53:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.984 17:53:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:55.984 17:53:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.984 17:53:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.984 17:53:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:55.984 17:53:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:55.984 17:53:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.984 17:53:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.984 17:53:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.984 ************************************ 00:03:55.984 START TEST rpc_integrity 00:03:55.984 ************************************ 00:03:55.984 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:55.984 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.984 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.984 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.984 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.984 17:53:18 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.984 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.984 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.984 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.984 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.984 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.985 { 00:03:55.985 "name": "Malloc0", 00:03:55.985 "aliases": [ 00:03:55.985 "036491ae-b038-4236-8756-2d35e3927bbe" 00:03:55.985 ], 00:03:55.985 "product_name": "Malloc disk", 00:03:55.985 "block_size": 512, 00:03:55.985 "num_blocks": 16384, 00:03:55.985 "uuid": "036491ae-b038-4236-8756-2d35e3927bbe", 00:03:55.985 "assigned_rate_limits": { 00:03:55.985 "rw_ios_per_sec": 0, 00:03:55.985 "rw_mbytes_per_sec": 0, 00:03:55.985 "r_mbytes_per_sec": 0, 00:03:55.985 "w_mbytes_per_sec": 0 00:03:55.985 }, 00:03:55.985 "claimed": false, 00:03:55.985 "zoned": false, 00:03:55.985 "supported_io_types": { 00:03:55.985 "read": true, 00:03:55.985 "write": true, 00:03:55.985 "unmap": true, 00:03:55.985 "flush": true, 00:03:55.985 "reset": true, 00:03:55.985 "nvme_admin": false, 00:03:55.985 "nvme_io": false, 00:03:55.985 "nvme_io_md": false, 00:03:55.985 "write_zeroes": true, 00:03:55.985 "zcopy": true, 00:03:55.985 "get_zone_info": false, 00:03:55.985 
"zone_management": false, 00:03:55.985 "zone_append": false, 00:03:55.985 "compare": false, 00:03:55.985 "compare_and_write": false, 00:03:55.985 "abort": true, 00:03:55.985 "seek_hole": false, 00:03:55.985 "seek_data": false, 00:03:55.985 "copy": true, 00:03:55.985 "nvme_iov_md": false 00:03:55.985 }, 00:03:55.985 "memory_domains": [ 00:03:55.985 { 00:03:55.985 "dma_device_id": "system", 00:03:55.985 "dma_device_type": 1 00:03:55.985 }, 00:03:55.985 { 00:03:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.985 "dma_device_type": 2 00:03:55.985 } 00:03:55.985 ], 00:03:55.985 "driver_specific": {} 00:03:55.985 } 00:03:55.985 ]' 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.985 [2024-12-09 17:53:18.984638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:55.985 [2024-12-09 17:53:18.984684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.985 [2024-12-09 17:53:18.984707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18c9020 00:03:55.985 [2024-12-09 17:53:18.984723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.985 [2024-12-09 17:53:18.986073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.985 [2024-12-09 17:53:18.986096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.985 Passthru0 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.985 17:53:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.985 17:53:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.985 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.985 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.985 { 00:03:55.985 "name": "Malloc0", 00:03:55.985 "aliases": [ 00:03:55.985 "036491ae-b038-4236-8756-2d35e3927bbe" 00:03:55.985 ], 00:03:55.985 "product_name": "Malloc disk", 00:03:55.985 "block_size": 512, 00:03:55.985 "num_blocks": 16384, 00:03:55.985 "uuid": "036491ae-b038-4236-8756-2d35e3927bbe", 00:03:55.985 "assigned_rate_limits": { 00:03:55.985 "rw_ios_per_sec": 0, 00:03:55.985 "rw_mbytes_per_sec": 0, 00:03:55.985 "r_mbytes_per_sec": 0, 00:03:55.985 "w_mbytes_per_sec": 0 00:03:55.985 }, 00:03:55.985 "claimed": true, 00:03:55.985 "claim_type": "exclusive_write", 00:03:55.985 "zoned": false, 00:03:55.985 "supported_io_types": { 00:03:55.985 "read": true, 00:03:55.985 "write": true, 00:03:55.985 "unmap": true, 00:03:55.985 "flush": true, 00:03:55.985 "reset": true, 00:03:55.985 "nvme_admin": false, 00:03:55.985 "nvme_io": false, 00:03:55.985 "nvme_io_md": false, 00:03:55.985 "write_zeroes": true, 00:03:55.985 "zcopy": true, 00:03:55.985 "get_zone_info": false, 00:03:55.985 "zone_management": false, 00:03:55.985 "zone_append": false, 00:03:55.985 "compare": false, 00:03:55.985 "compare_and_write": false, 00:03:55.985 "abort": true, 00:03:55.985 "seek_hole": false, 00:03:55.985 "seek_data": false, 00:03:55.985 "copy": true, 00:03:55.985 "nvme_iov_md": false 00:03:55.985 }, 00:03:55.985 "memory_domains": [ 00:03:55.985 { 00:03:55.985 "dma_device_id": "system", 00:03:55.985 "dma_device_type": 1 00:03:55.985 }, 00:03:55.985 { 00:03:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.985 "dma_device_type": 2 00:03:55.985 } 00:03:55.985 ], 00:03:55.985 "driver_specific": {} 00:03:55.985 }, 00:03:55.985 { 
00:03:55.985 "name": "Passthru0", 00:03:55.985 "aliases": [ 00:03:55.985 "f810fda5-091d-570a-b775-313139428afd" 00:03:55.985 ], 00:03:55.985 "product_name": "passthru", 00:03:55.985 "block_size": 512, 00:03:55.985 "num_blocks": 16384, 00:03:55.985 "uuid": "f810fda5-091d-570a-b775-313139428afd", 00:03:55.985 "assigned_rate_limits": { 00:03:55.985 "rw_ios_per_sec": 0, 00:03:55.985 "rw_mbytes_per_sec": 0, 00:03:55.985 "r_mbytes_per_sec": 0, 00:03:55.985 "w_mbytes_per_sec": 0 00:03:55.985 }, 00:03:55.985 "claimed": false, 00:03:55.985 "zoned": false, 00:03:55.985 "supported_io_types": { 00:03:55.985 "read": true, 00:03:55.985 "write": true, 00:03:55.985 "unmap": true, 00:03:55.985 "flush": true, 00:03:55.985 "reset": true, 00:03:55.985 "nvme_admin": false, 00:03:55.985 "nvme_io": false, 00:03:55.985 "nvme_io_md": false, 00:03:55.985 "write_zeroes": true, 00:03:55.985 "zcopy": true, 00:03:55.985 "get_zone_info": false, 00:03:55.985 "zone_management": false, 00:03:55.985 "zone_append": false, 00:03:55.985 "compare": false, 00:03:55.985 "compare_and_write": false, 00:03:55.985 "abort": true, 00:03:55.985 "seek_hole": false, 00:03:55.985 "seek_data": false, 00:03:55.985 "copy": true, 00:03:55.985 "nvme_iov_md": false 00:03:55.985 }, 00:03:55.985 "memory_domains": [ 00:03:55.985 { 00:03:55.985 "dma_device_id": "system", 00:03:55.985 "dma_device_type": 1 00:03:55.985 }, 00:03:55.985 { 00:03:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.985 "dma_device_type": 2 00:03:55.985 } 00:03:55.985 ], 00:03:55.985 "driver_specific": { 00:03:55.985 "passthru": { 00:03:55.985 "name": "Passthru0", 00:03:55.985 "base_bdev_name": "Malloc0" 00:03:55.985 } 00:03:55.985 } 00:03:55.985 } 00:03:55.985 ]' 00:03:55.985 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:56.243 17:53:19 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.243 17:53:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.243 00:03:56.243 real 0m0.223s 00:03:56.243 user 0m0.144s 00:03:56.243 sys 0m0.020s 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.243 17:53:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 ************************************ 00:03:56.243 END TEST rpc_integrity 00:03:56.243 ************************************ 00:03:56.243 17:53:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:56.243 17:53:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.243 17:53:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.243 17:53:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 ************************************ 00:03:56.243 START TEST rpc_plugins 
00:03:56.243 ************************************ 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:56.243 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.243 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:56.243 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.243 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.243 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:56.243 { 00:03:56.243 "name": "Malloc1", 00:03:56.243 "aliases": [ 00:03:56.243 "916ff705-bda9-4b07-837e-f591d4848e0e" 00:03:56.243 ], 00:03:56.243 "product_name": "Malloc disk", 00:03:56.243 "block_size": 4096, 00:03:56.243 "num_blocks": 256, 00:03:56.243 "uuid": "916ff705-bda9-4b07-837e-f591d4848e0e", 00:03:56.243 "assigned_rate_limits": { 00:03:56.243 "rw_ios_per_sec": 0, 00:03:56.243 "rw_mbytes_per_sec": 0, 00:03:56.243 "r_mbytes_per_sec": 0, 00:03:56.243 "w_mbytes_per_sec": 0 00:03:56.243 }, 00:03:56.243 "claimed": false, 00:03:56.243 "zoned": false, 00:03:56.243 "supported_io_types": { 00:03:56.243 "read": true, 00:03:56.243 "write": true, 00:03:56.243 "unmap": true, 00:03:56.243 "flush": true, 00:03:56.243 "reset": true, 00:03:56.243 "nvme_admin": false, 00:03:56.243 "nvme_io": false, 00:03:56.243 "nvme_io_md": false, 00:03:56.243 "write_zeroes": true, 00:03:56.243 "zcopy": true, 00:03:56.243 "get_zone_info": false, 00:03:56.244 "zone_management": false, 00:03:56.244 
"zone_append": false, 00:03:56.244 "compare": false, 00:03:56.244 "compare_and_write": false, 00:03:56.244 "abort": true, 00:03:56.244 "seek_hole": false, 00:03:56.244 "seek_data": false, 00:03:56.244 "copy": true, 00:03:56.244 "nvme_iov_md": false 00:03:56.244 }, 00:03:56.244 "memory_domains": [ 00:03:56.244 { 00:03:56.244 "dma_device_id": "system", 00:03:56.244 "dma_device_type": 1 00:03:56.244 }, 00:03:56.244 { 00:03:56.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.244 "dma_device_type": 2 00:03:56.244 } 00:03:56.244 ], 00:03:56.244 "driver_specific": {} 00:03:56.244 } 00:03:56.244 ]' 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:56.244 17:53:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:56.244 00:03:56.244 real 0m0.108s 00:03:56.244 user 0m0.066s 00:03:56.244 sys 0m0.010s 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.244 17:53:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.244 ************************************ 
00:03:56.244 END TEST rpc_plugins 00:03:56.244 ************************************ 00:03:56.244 17:53:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:56.244 17:53:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.244 17:53:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.244 17:53:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.502 ************************************ 00:03:56.502 START TEST rpc_trace_cmd_test 00:03:56.502 ************************************ 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:56.502 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1341209", 00:03:56.502 "tpoint_group_mask": "0x8", 00:03:56.502 "iscsi_conn": { 00:03:56.502 "mask": "0x2", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "scsi": { 00:03:56.502 "mask": "0x4", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "bdev": { 00:03:56.502 "mask": "0x8", 00:03:56.502 "tpoint_mask": "0xffffffffffffffff" 00:03:56.502 }, 00:03:56.502 "nvmf_rdma": { 00:03:56.502 "mask": "0x10", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "nvmf_tcp": { 00:03:56.502 "mask": "0x20", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "ftl": { 00:03:56.502 "mask": "0x40", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "blobfs": { 00:03:56.502 "mask": "0x80", 00:03:56.502 
"tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "dsa": { 00:03:56.502 "mask": "0x200", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "thread": { 00:03:56.502 "mask": "0x400", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "nvme_pcie": { 00:03:56.502 "mask": "0x800", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "iaa": { 00:03:56.502 "mask": "0x1000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "nvme_tcp": { 00:03:56.502 "mask": "0x2000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "bdev_nvme": { 00:03:56.502 "mask": "0x4000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "sock": { 00:03:56.502 "mask": "0x8000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "blob": { 00:03:56.502 "mask": "0x10000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "bdev_raid": { 00:03:56.502 "mask": "0x20000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 }, 00:03:56.502 "scheduler": { 00:03:56.502 "mask": "0x40000", 00:03:56.502 "tpoint_mask": "0x0" 00:03:56.502 } 00:03:56.502 }' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:56.502 00:03:56.502 real 0m0.191s 00:03:56.502 user 0m0.165s 00:03:56.502 sys 0m0.016s 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.502 17:53:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:56.502 ************************************ 00:03:56.502 END TEST rpc_trace_cmd_test 00:03:56.502 ************************************ 00:03:56.502 17:53:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:56.502 17:53:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:56.502 17:53:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:56.502 17:53:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.502 17:53:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.502 17:53:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.502 ************************************ 00:03:56.502 START TEST rpc_daemon_integrity 00:03:56.502 ************************************ 00:03:56.502 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:56.502 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.502 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.502 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:56.761 { 00:03:56.761 "name": "Malloc2", 00:03:56.761 "aliases": [ 00:03:56.761 "0dd822a4-1b28-4b7f-8580-dcabdff18eef" 00:03:56.761 ], 00:03:56.761 "product_name": "Malloc disk", 00:03:56.761 "block_size": 512, 00:03:56.761 "num_blocks": 16384, 00:03:56.761 "uuid": "0dd822a4-1b28-4b7f-8580-dcabdff18eef", 00:03:56.761 "assigned_rate_limits": { 00:03:56.761 "rw_ios_per_sec": 0, 00:03:56.761 "rw_mbytes_per_sec": 0, 00:03:56.761 "r_mbytes_per_sec": 0, 00:03:56.761 "w_mbytes_per_sec": 0 00:03:56.761 }, 00:03:56.761 "claimed": false, 00:03:56.761 "zoned": false, 00:03:56.761 "supported_io_types": { 00:03:56.761 "read": true, 00:03:56.761 "write": true, 00:03:56.761 "unmap": true, 00:03:56.761 "flush": true, 00:03:56.761 "reset": true, 00:03:56.761 "nvme_admin": false, 00:03:56.761 "nvme_io": false, 00:03:56.761 "nvme_io_md": false, 00:03:56.761 "write_zeroes": true, 00:03:56.761 "zcopy": true, 00:03:56.761 "get_zone_info": false, 00:03:56.761 "zone_management": false, 00:03:56.761 "zone_append": false, 00:03:56.761 "compare": false, 00:03:56.761 "compare_and_write": false, 00:03:56.761 "abort": true, 00:03:56.761 "seek_hole": false, 00:03:56.761 "seek_data": false, 00:03:56.761 "copy": true, 00:03:56.761 "nvme_iov_md": false 00:03:56.761 }, 00:03:56.761 "memory_domains": [ 00:03:56.761 { 
00:03:56.761 "dma_device_id": "system", 00:03:56.761 "dma_device_type": 1 00:03:56.761 }, 00:03:56.761 { 00:03:56.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.761 "dma_device_type": 2 00:03:56.761 } 00:03:56.761 ], 00:03:56.761 "driver_specific": {} 00:03:56.761 } 00:03:56.761 ]' 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.761 [2024-12-09 17:53:19.638589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:56.761 [2024-12-09 17:53:19.638645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.761 [2024-12-09 17:53:19.638668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1818320 00:03:56.761 [2024-12-09 17:53:19.638682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.761 [2024-12-09 17:53:19.639932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.761 [2024-12-09 17:53:19.639954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:56.761 Passthru0 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:56.761 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:56.761 { 00:03:56.761 "name": "Malloc2", 00:03:56.761 "aliases": [ 00:03:56.761 "0dd822a4-1b28-4b7f-8580-dcabdff18eef" 00:03:56.761 ], 00:03:56.761 "product_name": "Malloc disk", 00:03:56.761 "block_size": 512, 00:03:56.761 "num_blocks": 16384, 00:03:56.761 "uuid": "0dd822a4-1b28-4b7f-8580-dcabdff18eef", 00:03:56.761 "assigned_rate_limits": { 00:03:56.761 "rw_ios_per_sec": 0, 00:03:56.761 "rw_mbytes_per_sec": 0, 00:03:56.761 "r_mbytes_per_sec": 0, 00:03:56.761 "w_mbytes_per_sec": 0 00:03:56.761 }, 00:03:56.761 "claimed": true, 00:03:56.761 "claim_type": "exclusive_write", 00:03:56.761 "zoned": false, 00:03:56.761 "supported_io_types": { 00:03:56.761 "read": true, 00:03:56.761 "write": true, 00:03:56.761 "unmap": true, 00:03:56.761 "flush": true, 00:03:56.761 "reset": true, 00:03:56.761 "nvme_admin": false, 00:03:56.761 "nvme_io": false, 00:03:56.761 "nvme_io_md": false, 00:03:56.761 "write_zeroes": true, 00:03:56.761 "zcopy": true, 00:03:56.761 "get_zone_info": false, 00:03:56.761 "zone_management": false, 00:03:56.761 "zone_append": false, 00:03:56.761 "compare": false, 00:03:56.761 "compare_and_write": false, 00:03:56.761 "abort": true, 00:03:56.761 "seek_hole": false, 00:03:56.761 "seek_data": false, 00:03:56.761 "copy": true, 00:03:56.761 "nvme_iov_md": false 00:03:56.761 }, 00:03:56.761 "memory_domains": [ 00:03:56.761 { 00:03:56.761 "dma_device_id": "system", 00:03:56.761 "dma_device_type": 1 00:03:56.761 }, 00:03:56.761 { 00:03:56.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.761 "dma_device_type": 2 00:03:56.761 } 00:03:56.761 ], 00:03:56.761 "driver_specific": {} 00:03:56.761 }, 00:03:56.761 { 00:03:56.761 "name": "Passthru0", 00:03:56.761 "aliases": [ 00:03:56.761 "b8579d05-cc72-530e-bbc3-4808832a876e" 00:03:56.761 ], 00:03:56.761 "product_name": "passthru", 00:03:56.761 "block_size": 512, 00:03:56.761 "num_blocks": 16384, 00:03:56.761 "uuid": 
"b8579d05-cc72-530e-bbc3-4808832a876e", 00:03:56.761 "assigned_rate_limits": { 00:03:56.761 "rw_ios_per_sec": 0, 00:03:56.761 "rw_mbytes_per_sec": 0, 00:03:56.761 "r_mbytes_per_sec": 0, 00:03:56.761 "w_mbytes_per_sec": 0 00:03:56.761 }, 00:03:56.761 "claimed": false, 00:03:56.761 "zoned": false, 00:03:56.761 "supported_io_types": { 00:03:56.761 "read": true, 00:03:56.761 "write": true, 00:03:56.761 "unmap": true, 00:03:56.761 "flush": true, 00:03:56.761 "reset": true, 00:03:56.761 "nvme_admin": false, 00:03:56.761 "nvme_io": false, 00:03:56.761 "nvme_io_md": false, 00:03:56.761 "write_zeroes": true, 00:03:56.761 "zcopy": true, 00:03:56.761 "get_zone_info": false, 00:03:56.761 "zone_management": false, 00:03:56.762 "zone_append": false, 00:03:56.762 "compare": false, 00:03:56.762 "compare_and_write": false, 00:03:56.762 "abort": true, 00:03:56.762 "seek_hole": false, 00:03:56.762 "seek_data": false, 00:03:56.762 "copy": true, 00:03:56.762 "nvme_iov_md": false 00:03:56.762 }, 00:03:56.762 "memory_domains": [ 00:03:56.762 { 00:03:56.762 "dma_device_id": "system", 00:03:56.762 "dma_device_type": 1 00:03:56.762 }, 00:03:56.762 { 00:03:56.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.762 "dma_device_type": 2 00:03:56.762 } 00:03:56.762 ], 00:03:56.762 "driver_specific": { 00:03:56.762 "passthru": { 00:03:56.762 "name": "Passthru0", 00:03:56.762 "base_bdev_name": "Malloc2" 00:03:56.762 } 00:03:56.762 } 00:03:56.762 } 00:03:56.762 ]' 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.762 00:03:56.762 real 0m0.211s 00:03:56.762 user 0m0.135s 00:03:56.762 sys 0m0.022s 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.762 17:53:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.762 ************************************ 00:03:56.762 END TEST rpc_daemon_integrity 00:03:56.762 ************************************ 00:03:56.762 17:53:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:56.762 17:53:19 rpc -- rpc/rpc.sh@84 -- # killprocess 1341209 00:03:56.762 17:53:19 rpc -- common/autotest_common.sh@954 -- # '[' -z 1341209 ']' 00:03:56.762 17:53:19 rpc -- common/autotest_common.sh@958 -- # kill -0 1341209 00:03:56.762 17:53:19 rpc -- common/autotest_common.sh@959 -- # uname 00:03:56.762 17:53:19 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.762 17:53:19 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1341209 00:03:57.019 17:53:19 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.020 17:53:19 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.020 17:53:19 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1341209' 00:03:57.020 killing process with pid 1341209 00:03:57.020 17:53:19 rpc -- common/autotest_common.sh@973 -- # kill 1341209 00:03:57.020 17:53:19 rpc -- common/autotest_common.sh@978 -- # wait 1341209 00:03:57.279 00:03:57.279 real 0m1.969s 00:03:57.279 user 0m2.448s 00:03:57.279 sys 0m0.599s 00:03:57.279 17:53:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.279 17:53:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.279 ************************************ 00:03:57.279 END TEST rpc 00:03:57.279 ************************************ 00:03:57.279 17:53:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.279 17:53:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.279 17:53:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.279 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:03:57.279 ************************************ 00:03:57.279 START TEST skip_rpc 00:03:57.279 ************************************ 00:03:57.279 17:53:20 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.537 * Looking for test storage... 
00:03:57.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.537 17:53:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:57.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.537 --rc genhtml_branch_coverage=1 00:03:57.537 --rc genhtml_function_coverage=1 00:03:57.537 --rc genhtml_legend=1 00:03:57.537 --rc geninfo_all_blocks=1 00:03:57.537 --rc geninfo_unexecuted_blocks=1 00:03:57.537 00:03:57.537 ' 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:57.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.537 --rc genhtml_branch_coverage=1 00:03:57.537 --rc genhtml_function_coverage=1 00:03:57.537 --rc genhtml_legend=1 00:03:57.537 --rc geninfo_all_blocks=1 00:03:57.537 --rc geninfo_unexecuted_blocks=1 00:03:57.537 00:03:57.537 ' 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:57.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.537 --rc genhtml_branch_coverage=1 00:03:57.537 --rc genhtml_function_coverage=1 00:03:57.537 --rc genhtml_legend=1 00:03:57.537 --rc geninfo_all_blocks=1 00:03:57.537 --rc geninfo_unexecuted_blocks=1 00:03:57.537 00:03:57.537 ' 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:57.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.537 --rc genhtml_branch_coverage=1 00:03:57.537 --rc genhtml_function_coverage=1 00:03:57.537 --rc genhtml_legend=1 00:03:57.537 --rc geninfo_all_blocks=1 00:03:57.537 --rc geninfo_unexecuted_blocks=1 00:03:57.537 00:03:57.537 ' 00:03:57.537 17:53:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.537 17:53:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:57.537 17:53:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.537 17:53:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.537 ************************************ 00:03:57.537 START TEST skip_rpc 00:03:57.537 ************************************ 00:03:57.537 17:53:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:57.537 17:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1341554 00:03:57.537 17:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:57.537 17:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.537 17:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:57.537 [2024-12-09 17:53:20.511254] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:03:57.538 [2024-12-09 17:53:20.511337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341554 ] 00:03:57.795 [2024-12-09 17:53:20.580969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.795 [2024-12-09 17:53:20.641506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:03.055 17:53:25 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1341554 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1341554 ']' 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1341554 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1341554 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1341554' 00:04:03.055 killing process with pid 1341554 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1341554 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1341554 00:04:03.055 00:04:03.055 real 0m5.463s 00:04:03.055 user 0m5.156s 00:04:03.055 sys 0m0.318s 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.055 17:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.055 ************************************ 00:04:03.055 END TEST skip_rpc 00:04:03.055 ************************************ 00:04:03.055 17:53:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:03.055 17:53:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.055 17:53:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.055 17:53:25 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.055 ************************************ 00:04:03.055 START TEST skip_rpc_with_json 00:04:03.055 ************************************ 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1342235 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1342235 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1342235 ']' 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.055 17:53:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.055 [2024-12-09 17:53:26.023882] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:03.055 [2024-12-09 17:53:26.023986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342235 ] 00:04:03.056 [2024-12-09 17:53:26.092990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.313 [2024-12-09 17:53:26.153035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.571 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.571 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:03.571 17:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:03.571 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.571 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 [2024-12-09 17:53:26.422517] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:03.571 request: 00:04:03.571 { 00:04:03.571 "trtype": "tcp", 00:04:03.571 "method": "nvmf_get_transports", 00:04:03.571 "req_id": 1 00:04:03.571 } 00:04:03.571 Got JSON-RPC error response 00:04:03.571 response: 00:04:03.572 { 00:04:03.572 "code": -19, 00:04:03.572 "message": "No such device" 00:04:03.572 } 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.572 [2024-12-09 17:53:26.430672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:03.572 17:53:26 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.572 17:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:03.572 { 00:04:03.572 "subsystems": [ 00:04:03.572 { 00:04:03.572 "subsystem": "fsdev", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "fsdev_set_opts", 00:04:03.572 "params": { 00:04:03.572 "fsdev_io_pool_size": 65535, 00:04:03.572 "fsdev_io_cache_size": 256 00:04:03.572 } 00:04:03.572 } 00:04:03.572 ] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "vfio_user_target", 00:04:03.572 "config": null 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "keyring", 00:04:03.572 "config": [] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "iobuf", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "iobuf_set_options", 00:04:03.572 "params": { 00:04:03.572 "small_pool_count": 8192, 00:04:03.572 "large_pool_count": 1024, 00:04:03.572 "small_bufsize": 8192, 00:04:03.572 "large_bufsize": 135168, 00:04:03.572 "enable_numa": false 00:04:03.572 } 00:04:03.572 } 00:04:03.572 ] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "sock", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "sock_set_default_impl", 00:04:03.572 "params": { 00:04:03.572 "impl_name": "posix" 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "sock_impl_set_options", 00:04:03.572 "params": { 00:04:03.572 "impl_name": "ssl", 00:04:03.572 "recv_buf_size": 4096, 00:04:03.572 "send_buf_size": 4096, 
00:04:03.572 "enable_recv_pipe": true, 00:04:03.572 "enable_quickack": false, 00:04:03.572 "enable_placement_id": 0, 00:04:03.572 "enable_zerocopy_send_server": true, 00:04:03.572 "enable_zerocopy_send_client": false, 00:04:03.572 "zerocopy_threshold": 0, 00:04:03.572 "tls_version": 0, 00:04:03.572 "enable_ktls": false 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "sock_impl_set_options", 00:04:03.572 "params": { 00:04:03.572 "impl_name": "posix", 00:04:03.572 "recv_buf_size": 2097152, 00:04:03.572 "send_buf_size": 2097152, 00:04:03.572 "enable_recv_pipe": true, 00:04:03.572 "enable_quickack": false, 00:04:03.572 "enable_placement_id": 0, 00:04:03.572 "enable_zerocopy_send_server": true, 00:04:03.572 "enable_zerocopy_send_client": false, 00:04:03.572 "zerocopy_threshold": 0, 00:04:03.572 "tls_version": 0, 00:04:03.572 "enable_ktls": false 00:04:03.572 } 00:04:03.572 } 00:04:03.572 ] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "vmd", 00:04:03.572 "config": [] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "accel", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "accel_set_options", 00:04:03.572 "params": { 00:04:03.572 "small_cache_size": 128, 00:04:03.572 "large_cache_size": 16, 00:04:03.572 "task_count": 2048, 00:04:03.572 "sequence_count": 2048, 00:04:03.572 "buf_count": 2048 00:04:03.572 } 00:04:03.572 } 00:04:03.572 ] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "bdev", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "bdev_set_options", 00:04:03.572 "params": { 00:04:03.572 "bdev_io_pool_size": 65535, 00:04:03.572 "bdev_io_cache_size": 256, 00:04:03.572 "bdev_auto_examine": true, 00:04:03.572 "iobuf_small_cache_size": 128, 00:04:03.572 "iobuf_large_cache_size": 16 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "bdev_raid_set_options", 00:04:03.572 "params": { 00:04:03.572 "process_window_size_kb": 1024, 00:04:03.572 "process_max_bandwidth_mb_sec": 0 
00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "bdev_iscsi_set_options", 00:04:03.572 "params": { 00:04:03.572 "timeout_sec": 30 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "bdev_nvme_set_options", 00:04:03.572 "params": { 00:04:03.572 "action_on_timeout": "none", 00:04:03.572 "timeout_us": 0, 00:04:03.572 "timeout_admin_us": 0, 00:04:03.572 "keep_alive_timeout_ms": 10000, 00:04:03.572 "arbitration_burst": 0, 00:04:03.572 "low_priority_weight": 0, 00:04:03.572 "medium_priority_weight": 0, 00:04:03.572 "high_priority_weight": 0, 00:04:03.572 "nvme_adminq_poll_period_us": 10000, 00:04:03.572 "nvme_ioq_poll_period_us": 0, 00:04:03.572 "io_queue_requests": 0, 00:04:03.572 "delay_cmd_submit": true, 00:04:03.572 "transport_retry_count": 4, 00:04:03.572 "bdev_retry_count": 3, 00:04:03.572 "transport_ack_timeout": 0, 00:04:03.572 "ctrlr_loss_timeout_sec": 0, 00:04:03.572 "reconnect_delay_sec": 0, 00:04:03.572 "fast_io_fail_timeout_sec": 0, 00:04:03.572 "disable_auto_failback": false, 00:04:03.572 "generate_uuids": false, 00:04:03.572 "transport_tos": 0, 00:04:03.572 "nvme_error_stat": false, 00:04:03.572 "rdma_srq_size": 0, 00:04:03.572 "io_path_stat": false, 00:04:03.572 "allow_accel_sequence": false, 00:04:03.572 "rdma_max_cq_size": 0, 00:04:03.572 "rdma_cm_event_timeout_ms": 0, 00:04:03.572 "dhchap_digests": [ 00:04:03.572 "sha256", 00:04:03.572 "sha384", 00:04:03.572 "sha512" 00:04:03.572 ], 00:04:03.572 "dhchap_dhgroups": [ 00:04:03.572 "null", 00:04:03.572 "ffdhe2048", 00:04:03.572 "ffdhe3072", 00:04:03.572 "ffdhe4096", 00:04:03.572 "ffdhe6144", 00:04:03.572 "ffdhe8192" 00:04:03.572 ] 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "bdev_nvme_set_hotplug", 00:04:03.572 "params": { 00:04:03.572 "period_us": 100000, 00:04:03.572 "enable": false 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "bdev_wait_for_examine" 00:04:03.572 } 00:04:03.572 ] 00:04:03.572 }, 00:04:03.572 { 
00:04:03.572 "subsystem": "scsi", 00:04:03.572 "config": null 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "scheduler", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "framework_set_scheduler", 00:04:03.572 "params": { 00:04:03.572 "name": "static" 00:04:03.572 } 00:04:03.572 } 00:04:03.572 ] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "vhost_scsi", 00:04:03.572 "config": [] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "vhost_blk", 00:04:03.572 "config": [] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "ublk", 00:04:03.572 "config": [] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "nbd", 00:04:03.572 "config": [] 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "subsystem": "nvmf", 00:04:03.572 "config": [ 00:04:03.572 { 00:04:03.572 "method": "nvmf_set_config", 00:04:03.572 "params": { 00:04:03.572 "discovery_filter": "match_any", 00:04:03.572 "admin_cmd_passthru": { 00:04:03.572 "identify_ctrlr": false 00:04:03.572 }, 00:04:03.572 "dhchap_digests": [ 00:04:03.572 "sha256", 00:04:03.572 "sha384", 00:04:03.572 "sha512" 00:04:03.572 ], 00:04:03.572 "dhchap_dhgroups": [ 00:04:03.572 "null", 00:04:03.572 "ffdhe2048", 00:04:03.572 "ffdhe3072", 00:04:03.572 "ffdhe4096", 00:04:03.572 "ffdhe6144", 00:04:03.572 "ffdhe8192" 00:04:03.572 ] 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "nvmf_set_max_subsystems", 00:04:03.572 "params": { 00:04:03.572 "max_subsystems": 1024 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "nvmf_set_crdt", 00:04:03.572 "params": { 00:04:03.572 "crdt1": 0, 00:04:03.572 "crdt2": 0, 00:04:03.572 "crdt3": 0 00:04:03.572 } 00:04:03.572 }, 00:04:03.572 { 00:04:03.572 "method": "nvmf_create_transport", 00:04:03.572 "params": { 00:04:03.572 "trtype": "TCP", 00:04:03.572 "max_queue_depth": 128, 00:04:03.572 "max_io_qpairs_per_ctrlr": 127, 00:04:03.572 "in_capsule_data_size": 4096, 00:04:03.572 "max_io_size": 131072, 00:04:03.572 
"io_unit_size": 131072, 00:04:03.572 "max_aq_depth": 128, 00:04:03.572 "num_shared_buffers": 511, 00:04:03.572 "buf_cache_size": 4294967295, 00:04:03.572 "dif_insert_or_strip": false, 00:04:03.572 "zcopy": false, 00:04:03.572 "c2h_success": true, 00:04:03.572 "sock_priority": 0, 00:04:03.572 "abort_timeout_sec": 1, 00:04:03.572 "ack_timeout": 0, 00:04:03.572 "data_wr_pool_size": 0 00:04:03.572 } 00:04:03.572 } 00:04:03.572 ] 00:04:03.573 }, 00:04:03.573 { 00:04:03.573 "subsystem": "iscsi", 00:04:03.573 "config": [ 00:04:03.573 { 00:04:03.573 "method": "iscsi_set_options", 00:04:03.573 "params": { 00:04:03.573 "node_base": "iqn.2016-06.io.spdk", 00:04:03.573 "max_sessions": 128, 00:04:03.573 "max_connections_per_session": 2, 00:04:03.573 "max_queue_depth": 64, 00:04:03.573 "default_time2wait": 2, 00:04:03.573 "default_time2retain": 20, 00:04:03.573 "first_burst_length": 8192, 00:04:03.573 "immediate_data": true, 00:04:03.573 "allow_duplicated_isid": false, 00:04:03.573 "error_recovery_level": 0, 00:04:03.573 "nop_timeout": 60, 00:04:03.573 "nop_in_interval": 30, 00:04:03.573 "disable_chap": false, 00:04:03.573 "require_chap": false, 00:04:03.573 "mutual_chap": false, 00:04:03.573 "chap_group": 0, 00:04:03.573 "max_large_datain_per_connection": 64, 00:04:03.573 "max_r2t_per_connection": 4, 00:04:03.573 "pdu_pool_size": 36864, 00:04:03.573 "immediate_data_pool_size": 16384, 00:04:03.573 "data_out_pool_size": 2048 00:04:03.573 } 00:04:03.573 } 00:04:03.573 ] 00:04:03.573 } 00:04:03.573 ] 00:04:03.573 } 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1342235 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1342235 ']' 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1342235 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.573 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1342235 00:04:03.831 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.831 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.831 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1342235' 00:04:03.831 killing process with pid 1342235 00:04:03.831 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1342235 00:04:03.831 17:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1342235 00:04:04.088 17:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1342375 00:04:04.088 17:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.088 17:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1342375 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1342375 ']' 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1342375 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1342375 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1342375' 00:04:09.347 killing process with pid 1342375 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1342375 00:04:09.347 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1342375 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:09.606 00:04:09.606 real 0m6.538s 00:04:09.606 user 0m6.194s 00:04:09.606 sys 0m0.668s 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.606 ************************************ 00:04:09.606 END TEST skip_rpc_with_json 00:04:09.606 ************************************ 00:04:09.606 17:53:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:09.606 17:53:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.606 17:53:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.606 17:53:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.606 ************************************ 00:04:09.606 START TEST skip_rpc_with_delay 00:04:09.606 ************************************ 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.606 [2024-12-09 17:53:32.614907] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:09.606 00:04:09.606 real 0m0.073s 00:04:09.606 user 0m0.050s 00:04:09.606 sys 0m0.023s 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.606 17:53:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:09.606 ************************************ 00:04:09.606 END TEST skip_rpc_with_delay 00:04:09.606 ************************************ 00:04:09.864 17:53:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:09.864 17:53:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:09.864 17:53:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:09.864 17:53:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.864 17:53:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.864 17:53:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.864 ************************************ 00:04:09.864 START TEST exit_on_failed_rpc_init 00:04:09.864 ************************************ 00:04:09.864 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:09.864 17:53:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1343088 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1343088 
00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1343088 ']' 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.865 17:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:09.865 [2024-12-09 17:53:32.742021] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:09.865 [2024-12-09 17:53:32.742110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343088 ] 00:04:09.865 [2024-12-09 17:53:32.807079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.865 [2024-12-09 17:53:32.864647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.123 
17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:10.123 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.381 [2024-12-09 17:53:33.189656] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:10.381 [2024-12-09 17:53:33.189732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343219 ] 00:04:10.381 [2024-12-09 17:53:33.253701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.381 [2024-12-09 17:53:33.312792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.381 [2024-12-09 17:53:33.312925] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:10.381 [2024-12-09 17:53:33.312945] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:10.381 [2024-12-09 17:53:33.312957] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1343088 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1343088 ']' 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1343088 00:04:10.381 17:53:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.381 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1343088 00:04:10.638 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.638 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.639 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1343088' 00:04:10.639 killing process with pid 1343088 00:04:10.639 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1343088 00:04:10.639 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1343088 00:04:10.897 00:04:10.897 real 0m1.170s 00:04:10.897 user 0m1.295s 00:04:10.897 sys 0m0.428s 00:04:10.897 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.897 17:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.897 ************************************ 00:04:10.897 END TEST exit_on_failed_rpc_init 00:04:10.897 ************************************ 00:04:10.897 17:53:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.897 00:04:10.897 real 0m13.605s 00:04:10.897 user 0m12.872s 00:04:10.897 sys 0m1.640s 00:04:10.897 17:53:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.897 17:53:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.897 ************************************ 00:04:10.897 END TEST skip_rpc 00:04:10.897 ************************************ 00:04:10.897 17:53:33 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.897 17:53:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.897 17:53:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.897 17:53:33 -- common/autotest_common.sh@10 -- # set +x 00:04:10.897 ************************************ 00:04:10.897 START TEST rpc_client 00:04:10.897 ************************************ 00:04:10.897 17:53:33 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.156 * Looking for test storage... 00:04:11.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:11.156 17:53:33 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:11.156 17:53:33 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:11.156 17:53:33 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.156 17:53:34 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.156 17:53:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:11.156 17:53:34 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.156 17:53:34 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.156 --rc genhtml_branch_coverage=1 00:04:11.156 --rc genhtml_function_coverage=1 00:04:11.156 --rc genhtml_legend=1 00:04:11.156 --rc geninfo_all_blocks=1 00:04:11.156 --rc geninfo_unexecuted_blocks=1 00:04:11.156 00:04:11.156 ' 00:04:11.156 17:53:34 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.157 --rc genhtml_branch_coverage=1 
00:04:11.157 --rc genhtml_function_coverage=1 00:04:11.157 --rc genhtml_legend=1 00:04:11.157 --rc geninfo_all_blocks=1 00:04:11.157 --rc geninfo_unexecuted_blocks=1 00:04:11.157 00:04:11.157 ' 00:04:11.157 17:53:34 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:11.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.157 --rc genhtml_branch_coverage=1 00:04:11.157 --rc genhtml_function_coverage=1 00:04:11.157 --rc genhtml_legend=1 00:04:11.157 --rc geninfo_all_blocks=1 00:04:11.157 --rc geninfo_unexecuted_blocks=1 00:04:11.157 00:04:11.157 ' 00:04:11.157 17:53:34 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.157 --rc genhtml_branch_coverage=1 00:04:11.157 --rc genhtml_function_coverage=1 00:04:11.157 --rc genhtml_legend=1 00:04:11.157 --rc geninfo_all_blocks=1 00:04:11.157 --rc geninfo_unexecuted_blocks=1 00:04:11.157 00:04:11.157 ' 00:04:11.157 17:53:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:11.157 OK 00:04:11.157 17:53:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:11.157 00:04:11.157 real 0m0.156s 00:04:11.157 user 0m0.101s 00:04:11.157 sys 0m0.064s 00:04:11.157 17:53:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.157 17:53:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:11.157 ************************************ 00:04:11.157 END TEST rpc_client 00:04:11.157 ************************************ 00:04:11.157 17:53:34 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.157 17:53:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.157 17:53:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.157 17:53:34 -- common/autotest_common.sh@10 
-- # set +x 00:04:11.157 ************************************ 00:04:11.157 START TEST json_config 00:04:11.157 ************************************ 00:04:11.157 17:53:34 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.157 17:53:34 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:11.157 17:53:34 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:11.157 17:53:34 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.416 17:53:34 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.416 17:53:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.416 17:53:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.416 17:53:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.416 17:53:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.416 17:53:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.416 17:53:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:11.416 17:53:34 json_config -- scripts/common.sh@345 -- # : 1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.416 17:53:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.416 17:53:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@353 -- # local d=1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.416 17:53:34 json_config -- scripts/common.sh@355 -- # echo 1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.416 17:53:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@353 -- # local d=2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.416 17:53:34 json_config -- scripts/common.sh@355 -- # echo 2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.416 17:53:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.416 17:53:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.416 17:53:34 json_config -- scripts/common.sh@368 -- # return 0 00:04:11.416 17:53:34 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.416 17:53:34 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.416 --rc genhtml_branch_coverage=1 00:04:11.416 --rc genhtml_function_coverage=1 00:04:11.416 --rc genhtml_legend=1 00:04:11.416 --rc geninfo_all_blocks=1 00:04:11.416 --rc geninfo_unexecuted_blocks=1 00:04:11.416 00:04:11.416 ' 00:04:11.416 17:53:34 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.416 --rc genhtml_branch_coverage=1 00:04:11.416 --rc genhtml_function_coverage=1 00:04:11.416 --rc genhtml_legend=1 00:04:11.416 --rc geninfo_all_blocks=1 00:04:11.416 --rc geninfo_unexecuted_blocks=1 00:04:11.416 00:04:11.416 ' 00:04:11.416 17:53:34 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.416 --rc genhtml_branch_coverage=1 00:04:11.416 --rc genhtml_function_coverage=1 00:04:11.416 --rc genhtml_legend=1 00:04:11.416 --rc geninfo_all_blocks=1 00:04:11.416 --rc geninfo_unexecuted_blocks=1 00:04:11.416 00:04:11.416 ' 00:04:11.416 17:53:34 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.416 --rc genhtml_branch_coverage=1 00:04:11.416 --rc genhtml_function_coverage=1 00:04:11.416 --rc genhtml_legend=1 00:04:11.416 --rc geninfo_all_blocks=1 00:04:11.416 --rc geninfo_unexecuted_blocks=1 00:04:11.416 00:04:11.416 ' 00:04:11.416 17:53:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.416 17:53:34 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.416 17:53:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:11.416 17:53:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.416 17:53:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.416 17:53:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.416 17:53:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.416 17:53:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.417 17:53:34 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.417 17:53:34 json_config -- paths/export.sh@5 -- # export PATH 00:04:11.417 17:53:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@51 -- # : 0 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:11.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:11.417 17:53:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:11.417 INFO: JSON configuration test init 00:04:11.417 17:53:34 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.417 17:53:34 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:11.417 17:53:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.417 17:53:34 json_config -- json_config/common.sh@10 -- # shift 00:04:11.417 17:53:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.417 17:53:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.417 17:53:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.417 17:53:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.417 17:53:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.417 17:53:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1343495 00:04:11.417 17:53:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:11.417 17:53:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.417 Waiting for target to run... 
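Both the rpc_client and json_config sections open with the same `scripts/common.sh` walk through `cmp_versions 1.15 '<' 2`, deciding whether the installed lcov predates 2.x (the `IFS=.-:` / `read -ra` / per-field `decimal` steps in the trace). A minimal re-sketch of that field-wise comparison — the function name `ver_lt` is mine, not SPDK's:

```shell
# Sketch of the cmp_versions idea traced above: split both version strings
# on ".", "-" and ":" (the IFS=.-: lines in the log), then compare field by
# field, treating missing fields as 0. Returns 0 (true) when $1 < $2.
ver_lt() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # an earlier field already decides it
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why the trace ends with `return 0` and then sets the pre-2.x `--rc lcov_branch_coverage=1` options: field 0 compares 1 < 2 and the walk stops there.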
00:04:11.417 17:53:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1343495 /var/tmp/spdk_tgt.sock 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@835 -- # '[' -z 1343495 ']' 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.417 17:53:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.417 [2024-12-09 17:53:34.321542] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:11.417 [2024-12-09 17:53:34.321641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343495 ] 00:04:11.985 [2024-12-09 17:53:34.835853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.985 [2024-12-09 17:53:34.887913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.548 17:53:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.548 17:53:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:12.548 17:53:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.548 00:04:12.548 17:53:35 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:12.548 17:53:35 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:12.548 17:53:35 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.548 17:53:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.548 17:53:35 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:12.548 17:53:35 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:12.548 17:53:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.548 17:53:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.548 17:53:35 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:12.548 17:53:35 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:12.548 17:53:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:15.830 17:53:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.830 17:53:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:15.830 17:53:38 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@54 -- # sort 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:15.830 17:53:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.830 17:53:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:15.830 17:53:38 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:15.830 17:53:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.830 17:53:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:15.830 17:53:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.830 17:53:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:16.088 MallocForNvmf0 00:04:16.088 17:53:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.088 17:53:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.346 MallocForNvmf1 00:04:16.346 17:53:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.346 17:53:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.603 [2024-12-09 17:53:39.612507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.603 17:53:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.603 17:53:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.168 17:53:39 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.168 17:53:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.168 17:53:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.168 17:53:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.425 17:53:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.425 17:53:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.682 [2024-12-09 17:53:40.692018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.682 17:53:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:17.682 17:53:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.682 17:53:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.940 17:53:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:17.940 17:53:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.940 17:53:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.940 17:53:40 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:04:17.940 17:53:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.940 17:53:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.197 MallocBdevForConfigChangeCheck 00:04:18.197 17:53:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:18.197 17:53:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.197 17:53:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.197 17:53:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:18.197 17:53:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.455 17:53:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:18.455 INFO: shutting down applications... 
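Condensed from the `create_nvmf_subsystem_config` trace above, the target was configured with the following `rpc.py` sequence. This is a non-runnable sketch: it needs a live `spdk_tgt` listening on `/var/tmp/spdk_tgt.sock`, and `$SPDK` is my stand-in for the full `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk` path.

```shell
# The rpc.py calls from the trace above, against a running spdk_tgt.
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc bdev_malloc_create 8 512  --name MallocForNvmf0   # two malloc bdevs
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # to serve as namespaces
$rpc nvmf_create_transport -t tcp -u 8192 -c 0         # "TCP Transport Init" notice
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

The final call produces the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice seen in the trace.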
00:04:18.455 17:53:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:18.455 17:53:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:18.455 17:53:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:18.455 17:53:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:20.416 Calling clear_iscsi_subsystem 00:04:20.416 Calling clear_nvmf_subsystem 00:04:20.416 Calling clear_nbd_subsystem 00:04:20.416 Calling clear_ublk_subsystem 00:04:20.416 Calling clear_vhost_blk_subsystem 00:04:20.416 Calling clear_vhost_scsi_subsystem 00:04:20.416 Calling clear_bdev_subsystem 00:04:20.416 17:53:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:20.416 17:53:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:20.416 17:53:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:20.416 17:53:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.416 17:53:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:20.416 17:53:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:20.674 17:53:43 json_config -- json_config/json_config.sh@352 -- # break 00:04:20.674 17:53:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:20.674 17:53:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:20.674 17:53:43 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:20.674 17:53:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.674 17:53:43 json_config -- json_config/common.sh@35 -- # [[ -n 1343495 ]] 00:04:20.674 17:53:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1343495 00:04:20.674 17:53:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.674 17:53:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.674 17:53:43 json_config -- json_config/common.sh@41 -- # kill -0 1343495 00:04:20.674 17:53:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:21.243 17:53:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.243 17:53:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.243 17:53:44 json_config -- json_config/common.sh@41 -- # kill -0 1343495 00:04:21.243 17:53:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.243 17:53:44 json_config -- json_config/common.sh@43 -- # break 00:04:21.243 17:53:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.243 17:53:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.243 SPDK target shutdown done 00:04:21.243 17:53:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:21.243 INFO: relaunching applications... 
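The shutdown above follows `json_config/common.sh`'s pattern: SIGINT the target, then poll with `kill -0` (which only tests process existence) up to 30 times with a 0.5 s sleep between attempts. A self-contained sketch — the function name `wait_for_exit` is mine:

```shell
# Send SIGINT, then poll for the pid to disappear, mirroring the
# kill -SIGINT / kill -0 / sleep 0.5 loop traced above.
wait_for_exit() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean exit
        sleep 0.5
    done
    return 1   # still running after ~15 s; caller may escalate
}

sleep 60 &                       # stand-in for spdk_tgt
wait_for_exit $! && echo "SPDK target shutdown done"
```

The bounded loop is what turns a hung target into a test failure instead of a stuck pipeline stage.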
00:04:21.243 17:53:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.243 17:53:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:21.243 17:53:44 json_config -- json_config/common.sh@10 -- # shift 00:04:21.243 17:53:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.243 17:53:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.243 17:53:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.243 17:53:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.243 17:53:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.243 17:53:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1344708 00:04:21.243 17:53:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.243 17:53:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.243 Waiting for target to run... 00:04:21.243 17:53:44 json_config -- json_config/common.sh@25 -- # waitforlisten 1344708 /var/tmp/spdk_tgt.sock 00:04:21.243 17:53:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 1344708 ']' 00:04:21.243 17:53:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.243 17:53:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.243 17:53:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:21.243 17:53:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.243 17:53:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.243 [2024-12-09 17:53:44.075871] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:21.243 [2024-12-09 17:53:44.075957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344708 ] 00:04:21.812 [2024-12-09 17:53:44.608135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.812 [2024-12-09 17:53:44.659395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.095 [2024-12-09 17:53:47.717986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.095 [2024-12-09 17:53:47.750462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:25.095 17:53:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.095 17:53:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:25.095 17:53:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.095 00:04:25.095 17:53:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:25.095 17:53:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:25.095 INFO: Checking if target configuration is the same... 
00:04:25.095 17:53:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.095 17:53:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:25.095 17:53:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.095 + '[' 2 -ne 2 ']' 00:04:25.095 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:25.095 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:25.095 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.095 +++ basename /dev/fd/62 00:04:25.095 ++ mktemp /tmp/62.XXX 00:04:25.095 + tmp_file_1=/tmp/62.JnX 00:04:25.095 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.095 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.095 + tmp_file_2=/tmp/spdk_tgt_config.json.QKY 00:04:25.095 + ret=0 00:04:25.095 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.354 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.354 + diff -u /tmp/62.JnX /tmp/spdk_tgt_config.json.QKY 00:04:25.354 + echo 'INFO: JSON config files are the same' 00:04:25.354 INFO: JSON config files are the same 00:04:25.354 + rm /tmp/62.JnX /tmp/spdk_tgt_config.json.QKY 00:04:25.354 + exit 0 00:04:25.354 17:53:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:25.354 17:53:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:25.354 INFO: changing configuration and checking if this can be detected... 
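The comparison traced above round-trips both configs through config_filter.py -method sort before diffing, so JSON key order cannot cause a spurious mismatch. A generic stand-in for that normalize-then-diff idea (the function name is mine; config_filter.py is SPDK's own normalizer and may do more than plain key sorting):

```shell
# Normalize two JSON files with sorted keys, then diff the normalized
# forms; exit status 0 means equal modulo key order.
json_same() {
    local a=$1 b=$2 ta tb rc
    ta=$(mktemp /tmp/norm.XXXXXX) || return 2
    tb=$(mktemp /tmp/norm.XXXXXX) || return 2
    python3 -c 'import json,sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True, indent=2))' "$a" > "$ta"
    python3 -c 'import json,sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True, indent=2))' "$b" > "$tb"
    diff -u "$ta" "$tb" > /dev/null; rc=$?
    rm -f "$ta" "$tb"
    return $rc
}
```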
00:04:25.354 17:53:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.354 17:53:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.615 17:53:48 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.615 17:53:48 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:25.615 17:53:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.615 + '[' 2 -ne 2 ']' 00:04:25.615 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:25.615 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
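The tgt_rpc calls above are a thin wrapper that pins every RPC to the target's control socket. A sketch of the same forwarding pattern (RPC_CMD and RPC_SOCK are my illustrative knobs; the real common.sh helper hardcodes scripts/rpc.py and /var/tmp/spdk_tgt.sock):

```shell
# Forward an RPC method plus its arguments to the client binary, always
# against the same UNIX socket, so call sites just say: rpc_wrap save_config
# RPC_CMD defaults to echo purely so the sketch is runnable without SPDK.
rpc_wrap() {
    "${RPC_CMD:-echo}" -s "${RPC_SOCK:-/var/tmp/spdk_tgt.sock}" "$@"
}
```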
00:04:25.615 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.615 +++ basename /dev/fd/62 00:04:25.615 ++ mktemp /tmp/62.XXX 00:04:25.615 + tmp_file_1=/tmp/62.pEd 00:04:25.615 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.615 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.615 + tmp_file_2=/tmp/spdk_tgt_config.json.dVD 00:04:25.615 + ret=0 00:04:25.615 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.180 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.180 + diff -u /tmp/62.pEd /tmp/spdk_tgt_config.json.dVD 00:04:26.180 + ret=1 00:04:26.180 + echo '=== Start of file: /tmp/62.pEd ===' 00:04:26.180 + cat /tmp/62.pEd 00:04:26.180 + echo '=== End of file: /tmp/62.pEd ===' 00:04:26.180 + echo '' 00:04:26.180 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dVD ===' 00:04:26.180 + cat /tmp/spdk_tgt_config.json.dVD 00:04:26.180 + echo '=== End of file: /tmp/spdk_tgt_config.json.dVD ===' 00:04:26.181 + echo '' 00:04:26.181 + rm /tmp/62.pEd /tmp/spdk_tgt_config.json.dVD 00:04:26.181 + exit 1 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:26.181 INFO: configuration change detected. 
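Once the change is detected, the test tears the target down through a killprocess helper: check liveness with kill -0, confirm the process identity via ps, then signal and reap. A condensed sketch of that teardown (the ps comm check against reactor_0 is SPDK-specific and elided here):

```shell
# Kill a pid we own and wait for it to be reaped; refuse if not running.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # liveness probe, sends no signal
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; only valid for our own children
}
```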
00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:26.181 17:53:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.181 17:53:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 1344708 ]] 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:26.181 17:53:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.181 17:53:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:26.181 17:53:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:26.181 17:53:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:26.181 17:53:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.181 17:53:49 json_config -- json_config/json_config.sh@330 -- # killprocess 1344708 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 1344708 ']' 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@958 -- # kill -0 
1344708 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@959 -- # uname 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1344708 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1344708' 00:04:26.181 killing process with pid 1344708 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@973 -- # kill 1344708 00:04:26.181 17:53:49 json_config -- common/autotest_common.sh@978 -- # wait 1344708 00:04:28.080 17:53:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.080 17:53:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:28.080 17:53:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.080 17:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.080 17:53:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:28.080 17:53:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:28.080 INFO: Success 00:04:28.080 00:04:28.080 real 0m16.543s 00:04:28.080 user 0m17.963s 00:04:28.080 sys 0m2.790s 00:04:28.080 17:53:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.080 17:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.080 ************************************ 00:04:28.080 END TEST json_config 00:04:28.080 ************************************ 00:04:28.080 17:53:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.080 17:53:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.080 17:53:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.080 17:53:50 -- common/autotest_common.sh@10 -- # set +x 00:04:28.080 ************************************ 00:04:28.080 START TEST json_config_extra_key 00:04:28.080 ************************************ 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.080 --rc genhtml_branch_coverage=1 00:04:28.080 --rc genhtml_function_coverage=1 00:04:28.080 --rc genhtml_legend=1 00:04:28.080 --rc geninfo_all_blocks=1 
00:04:28.080 --rc geninfo_unexecuted_blocks=1 00:04:28.080 00:04:28.080 ' 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.080 --rc genhtml_branch_coverage=1 00:04:28.080 --rc genhtml_function_coverage=1 00:04:28.080 --rc genhtml_legend=1 00:04:28.080 --rc geninfo_all_blocks=1 00:04:28.080 --rc geninfo_unexecuted_blocks=1 00:04:28.080 00:04:28.080 ' 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.080 --rc genhtml_branch_coverage=1 00:04:28.080 --rc genhtml_function_coverage=1 00:04:28.080 --rc genhtml_legend=1 00:04:28.080 --rc geninfo_all_blocks=1 00:04:28.080 --rc geninfo_unexecuted_blocks=1 00:04:28.080 00:04:28.080 ' 00:04:28.080 17:53:50 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.080 --rc genhtml_branch_coverage=1 00:04:28.080 --rc genhtml_function_coverage=1 00:04:28.080 --rc genhtml_legend=1 00:04:28.080 --rc geninfo_all_blocks=1 00:04:28.080 --rc geninfo_unexecuted_blocks=1 00:04:28.080 00:04:28.080 ' 00:04:28.080 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
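The cmp_versions trace above (checking lt 1.15 2 to pick lcov options) splits each version string on `.`, `-` and `:` and compares numeric components left to right, padding the shorter version with zeros. A compact stand-alone sketch (numeric components only; the function name is mine):

```shell
# Return 0 iff $1 sorts strictly before $2 as a dotted numeric version.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing components with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                          # equal versions are not "less than"
}
```

Note the component-wise loop is what makes 1.2.3 sort before 1.10: a plain string compare would get that case wrong.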
00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.080 17:53:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.080 17:53:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.080 17:53:50 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.080 17:53:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.080 17:53:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.081 17:53:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:28.081 17:53:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:28.081 17:53:50 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.081 17:53:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:28.081 INFO: launching applications... 00:04:28.081 17:53:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1345625 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.081 Waiting for target to run... 
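A few lines above, nvmf/common.sh line 33 logs a real warning, `[: : integer expression expected`: the traced test `'[' '' -eq 1 ']'` hands an empty string to a numeric comparison. A defensive sketch of the same check with a defaulted operand (the helper name is mine):

```shell
# Treat an empty or unset value as 0 so the numeric test never sees ''.
flag_is_set() {
    [ "${1:-0}" -eq 1 ]
}
```

Inside bash's `[[ ]]` an empty operand in `-eq` evaluates as 0 without complaint, but defaulting with `${var:-0}` keeps the check correct under plain `[` as used in the trace.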
00:04:28.081 17:53:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1345625 /var/tmp/spdk_tgt.sock 00:04:28.081 17:53:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1345625 ']' 00:04:28.081 17:53:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.081 17:53:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.081 17:53:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.081 17:53:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.081 17:53:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.081 [2024-12-09 17:53:50.917786] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:28.081 [2024-12-09 17:53:50.917878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345625 ] 00:04:28.649 [2024-12-09 17:53:51.430376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.649 [2024-12-09 17:53:51.482890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.907 17:53:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.907 17:53:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.907 00:04:28.907 17:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:28.907 INFO: shutting down applications... 00:04:28.907 17:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1345625 ]] 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1345625 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1345625 00:04:28.907 17:53:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1345625 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:29.474 17:53:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:29.474 SPDK target shutdown done 00:04:29.474 17:53:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:29.474 Success 00:04:29.474 00:04:29.474 real 0m1.689s 00:04:29.474 user 0m1.531s 00:04:29.474 sys 0m0.629s 00:04:29.474 17:53:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.474 17:53:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:04:29.474 ************************************ 00:04:29.474 END TEST json_config_extra_key 00:04:29.474 ************************************ 00:04:29.474 17:53:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.474 17:53:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.474 17:53:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.474 17:53:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.474 ************************************ 00:04:29.474 START TEST alias_rpc 00:04:29.474 ************************************ 00:04:29.474 17:53:52 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.733 * Looking for test storage... 00:04:29.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.733 17:53:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.733 --rc genhtml_branch_coverage=1 00:04:29.733 --rc genhtml_function_coverage=1 00:04:29.733 --rc genhtml_legend=1 00:04:29.733 --rc geninfo_all_blocks=1 00:04:29.733 --rc geninfo_unexecuted_blocks=1 00:04:29.733 
00:04:29.733 ' 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.733 --rc genhtml_branch_coverage=1 00:04:29.733 --rc genhtml_function_coverage=1 00:04:29.733 --rc genhtml_legend=1 00:04:29.733 --rc geninfo_all_blocks=1 00:04:29.733 --rc geninfo_unexecuted_blocks=1 00:04:29.733 00:04:29.733 ' 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.733 --rc genhtml_branch_coverage=1 00:04:29.733 --rc genhtml_function_coverage=1 00:04:29.733 --rc genhtml_legend=1 00:04:29.733 --rc geninfo_all_blocks=1 00:04:29.733 --rc geninfo_unexecuted_blocks=1 00:04:29.733 00:04:29.733 ' 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.733 --rc genhtml_branch_coverage=1 00:04:29.733 --rc genhtml_function_coverage=1 00:04:29.733 --rc genhtml_legend=1 00:04:29.733 --rc geninfo_all_blocks=1 00:04:29.733 --rc geninfo_unexecuted_blocks=1 00:04:29.733 00:04:29.733 ' 00:04:29.733 17:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:29.733 17:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1345941 00:04:29.733 17:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.733 17:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1345941 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1345941 ']' 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.733 17:53:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.733 [2024-12-09 17:53:52.665178] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:29.733 [2024-12-09 17:53:52.665274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345941 ] 00:04:29.733 [2024-12-09 17:53:52.730079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.992 [2024-12-09 17:53:52.788786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.250 17:53:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.250 17:53:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.250 17:53:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:30.507 17:53:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1345941 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1345941 ']' 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1345941 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1345941 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.507 
17:53:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1345941' 00:04:30.507 killing process with pid 1345941 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 1345941 00:04:30.507 17:53:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 1345941 00:04:30.765 00:04:30.765 real 0m1.332s 00:04:30.765 user 0m1.477s 00:04:30.766 sys 0m0.410s 00:04:30.766 17:53:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.766 17:53:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.766 ************************************ 00:04:30.766 END TEST alias_rpc 00:04:30.766 ************************************ 00:04:31.024 17:53:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:31.024 17:53:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.024 17:53:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.024 17:53:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.024 17:53:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.024 ************************************ 00:04:31.024 START TEST spdkcli_tcp 00:04:31.024 ************************************ 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.024 * Looking for test storage... 
00:04:31.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.024 17:53:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.024 --rc genhtml_branch_coverage=1 00:04:31.024 --rc genhtml_function_coverage=1 00:04:31.024 --rc genhtml_legend=1 00:04:31.024 --rc geninfo_all_blocks=1 00:04:31.024 --rc geninfo_unexecuted_blocks=1 00:04:31.024 00:04:31.024 ' 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.024 --rc genhtml_branch_coverage=1 00:04:31.024 --rc genhtml_function_coverage=1 00:04:31.024 --rc genhtml_legend=1 00:04:31.024 --rc geninfo_all_blocks=1 00:04:31.024 --rc geninfo_unexecuted_blocks=1 00:04:31.024 00:04:31.024 ' 00:04:31.024 17:53:53 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.024 --rc genhtml_branch_coverage=1 00:04:31.024 --rc genhtml_function_coverage=1 00:04:31.024 --rc genhtml_legend=1 00:04:31.024 --rc geninfo_all_blocks=1 00:04:31.024 --rc geninfo_unexecuted_blocks=1 00:04:31.024 00:04:31.024 ' 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.024 --rc genhtml_branch_coverage=1 00:04:31.024 --rc genhtml_function_coverage=1 00:04:31.024 --rc genhtml_legend=1 00:04:31.024 --rc geninfo_all_blocks=1 00:04:31.024 --rc geninfo_unexecuted_blocks=1 00:04:31.024 00:04:31.024 ' 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:31.024 17:53:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.024 17:53:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.024 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1346140 00:04:31.024 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:31.024 17:53:54 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1346140 00:04:31.024 17:53:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1346140 ']' 00:04:31.024 17:53:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.025 17:53:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.025 17:53:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.025 17:53:54 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.025 17:53:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.025 [2024-12-09 17:53:54.054642] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:31.025 [2024-12-09 17:53:54.054729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346140 ] 00:04:31.283 [2024-12-09 17:53:54.120300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.283 [2024-12-09 17:53:54.180287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.283 [2024-12-09 17:53:54.180291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.541 17:53:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.541 17:53:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:31.541 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1346266 00:04:31.541 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.541 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:31.799 [ 00:04:31.799 "bdev_malloc_delete", 00:04:31.799 "bdev_malloc_create", 00:04:31.799 "bdev_null_resize", 00:04:31.799 "bdev_null_delete", 00:04:31.799 "bdev_null_create", 00:04:31.799 "bdev_nvme_cuse_unregister", 00:04:31.799 "bdev_nvme_cuse_register", 00:04:31.799 "bdev_opal_new_user", 00:04:31.799 "bdev_opal_set_lock_state", 00:04:31.799 "bdev_opal_delete", 00:04:31.799 "bdev_opal_get_info", 00:04:31.799 "bdev_opal_create", 00:04:31.799 "bdev_nvme_opal_revert", 00:04:31.799 "bdev_nvme_opal_init", 00:04:31.799 "bdev_nvme_send_cmd", 00:04:31.799 "bdev_nvme_set_keys", 00:04:31.799 "bdev_nvme_get_path_iostat", 00:04:31.799 "bdev_nvme_get_mdns_discovery_info", 00:04:31.799 "bdev_nvme_stop_mdns_discovery", 00:04:31.799 "bdev_nvme_start_mdns_discovery", 00:04:31.799 "bdev_nvme_set_multipath_policy", 00:04:31.799 "bdev_nvme_set_preferred_path", 00:04:31.799 "bdev_nvme_get_io_paths", 00:04:31.799 "bdev_nvme_remove_error_injection", 00:04:31.799 "bdev_nvme_add_error_injection", 00:04:31.799 "bdev_nvme_get_discovery_info", 00:04:31.799 "bdev_nvme_stop_discovery", 00:04:31.799 "bdev_nvme_start_discovery", 00:04:31.799 "bdev_nvme_get_controller_health_info", 00:04:31.799 "bdev_nvme_disable_controller", 00:04:31.799 "bdev_nvme_enable_controller", 00:04:31.799 "bdev_nvme_reset_controller", 00:04:31.799 "bdev_nvme_get_transport_statistics", 00:04:31.799 "bdev_nvme_apply_firmware", 00:04:31.799 "bdev_nvme_detach_controller", 00:04:31.799 "bdev_nvme_get_controllers", 00:04:31.799 "bdev_nvme_attach_controller", 00:04:31.799 "bdev_nvme_set_hotplug", 00:04:31.799 "bdev_nvme_set_options", 00:04:31.799 "bdev_passthru_delete", 00:04:31.799 "bdev_passthru_create", 00:04:31.799 "bdev_lvol_set_parent_bdev", 00:04:31.799 "bdev_lvol_set_parent", 00:04:31.799 "bdev_lvol_check_shallow_copy", 00:04:31.799 "bdev_lvol_start_shallow_copy", 00:04:31.799 "bdev_lvol_grow_lvstore", 00:04:31.799 "bdev_lvol_get_lvols", 00:04:31.799 
"bdev_lvol_get_lvstores", 00:04:31.799 "bdev_lvol_delete", 00:04:31.799 "bdev_lvol_set_read_only", 00:04:31.799 "bdev_lvol_resize", 00:04:31.799 "bdev_lvol_decouple_parent", 00:04:31.799 "bdev_lvol_inflate", 00:04:31.799 "bdev_lvol_rename", 00:04:31.799 "bdev_lvol_clone_bdev", 00:04:31.799 "bdev_lvol_clone", 00:04:31.799 "bdev_lvol_snapshot", 00:04:31.799 "bdev_lvol_create", 00:04:31.799 "bdev_lvol_delete_lvstore", 00:04:31.799 "bdev_lvol_rename_lvstore", 00:04:31.799 "bdev_lvol_create_lvstore", 00:04:31.799 "bdev_raid_set_options", 00:04:31.799 "bdev_raid_remove_base_bdev", 00:04:31.799 "bdev_raid_add_base_bdev", 00:04:31.799 "bdev_raid_delete", 00:04:31.799 "bdev_raid_create", 00:04:31.799 "bdev_raid_get_bdevs", 00:04:31.799 "bdev_error_inject_error", 00:04:31.799 "bdev_error_delete", 00:04:31.799 "bdev_error_create", 00:04:31.799 "bdev_split_delete", 00:04:31.799 "bdev_split_create", 00:04:31.799 "bdev_delay_delete", 00:04:31.799 "bdev_delay_create", 00:04:31.799 "bdev_delay_update_latency", 00:04:31.799 "bdev_zone_block_delete", 00:04:31.799 "bdev_zone_block_create", 00:04:31.799 "blobfs_create", 00:04:31.799 "blobfs_detect", 00:04:31.799 "blobfs_set_cache_size", 00:04:31.799 "bdev_aio_delete", 00:04:31.799 "bdev_aio_rescan", 00:04:31.799 "bdev_aio_create", 00:04:31.799 "bdev_ftl_set_property", 00:04:31.799 "bdev_ftl_get_properties", 00:04:31.799 "bdev_ftl_get_stats", 00:04:31.799 "bdev_ftl_unmap", 00:04:31.799 "bdev_ftl_unload", 00:04:31.799 "bdev_ftl_delete", 00:04:31.799 "bdev_ftl_load", 00:04:31.799 "bdev_ftl_create", 00:04:31.799 "bdev_virtio_attach_controller", 00:04:31.799 "bdev_virtio_scsi_get_devices", 00:04:31.799 "bdev_virtio_detach_controller", 00:04:31.799 "bdev_virtio_blk_set_hotplug", 00:04:31.799 "bdev_iscsi_delete", 00:04:31.799 "bdev_iscsi_create", 00:04:31.799 "bdev_iscsi_set_options", 00:04:31.799 "accel_error_inject_error", 00:04:31.799 "ioat_scan_accel_module", 00:04:31.799 "dsa_scan_accel_module", 00:04:31.799 "iaa_scan_accel_module", 
00:04:31.799 "vfu_virtio_create_fs_endpoint", 00:04:31.799 "vfu_virtio_create_scsi_endpoint", 00:04:31.799 "vfu_virtio_scsi_remove_target", 00:04:31.799 "vfu_virtio_scsi_add_target", 00:04:31.799 "vfu_virtio_create_blk_endpoint", 00:04:31.799 "vfu_virtio_delete_endpoint", 00:04:31.799 "keyring_file_remove_key", 00:04:31.799 "keyring_file_add_key", 00:04:31.799 "keyring_linux_set_options", 00:04:31.799 "fsdev_aio_delete", 00:04:31.799 "fsdev_aio_create", 00:04:31.799 "iscsi_get_histogram", 00:04:31.799 "iscsi_enable_histogram", 00:04:31.799 "iscsi_set_options", 00:04:31.799 "iscsi_get_auth_groups", 00:04:31.799 "iscsi_auth_group_remove_secret", 00:04:31.799 "iscsi_auth_group_add_secret", 00:04:31.799 "iscsi_delete_auth_group", 00:04:31.799 "iscsi_create_auth_group", 00:04:31.799 "iscsi_set_discovery_auth", 00:04:31.799 "iscsi_get_options", 00:04:31.799 "iscsi_target_node_request_logout", 00:04:31.799 "iscsi_target_node_set_redirect", 00:04:31.799 "iscsi_target_node_set_auth", 00:04:31.799 "iscsi_target_node_add_lun", 00:04:31.799 "iscsi_get_stats", 00:04:31.799 "iscsi_get_connections", 00:04:31.799 "iscsi_portal_group_set_auth", 00:04:31.799 "iscsi_start_portal_group", 00:04:31.799 "iscsi_delete_portal_group", 00:04:31.799 "iscsi_create_portal_group", 00:04:31.799 "iscsi_get_portal_groups", 00:04:31.799 "iscsi_delete_target_node", 00:04:31.799 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.799 "iscsi_target_node_add_pg_ig_maps", 00:04:31.799 "iscsi_create_target_node", 00:04:31.799 "iscsi_get_target_nodes", 00:04:31.799 "iscsi_delete_initiator_group", 00:04:31.799 "iscsi_initiator_group_remove_initiators", 00:04:31.799 "iscsi_initiator_group_add_initiators", 00:04:31.799 "iscsi_create_initiator_group", 00:04:31.799 "iscsi_get_initiator_groups", 00:04:31.799 "nvmf_set_crdt", 00:04:31.799 "nvmf_set_config", 00:04:31.799 "nvmf_set_max_subsystems", 00:04:31.800 "nvmf_stop_mdns_prr", 00:04:31.800 "nvmf_publish_mdns_prr", 00:04:31.800 "nvmf_subsystem_get_listeners", 
00:04:31.800 "nvmf_subsystem_get_qpairs", 00:04:31.800 "nvmf_subsystem_get_controllers", 00:04:31.800 "nvmf_get_stats", 00:04:31.800 "nvmf_get_transports", 00:04:31.800 "nvmf_create_transport", 00:04:31.800 "nvmf_get_targets", 00:04:31.800 "nvmf_delete_target", 00:04:31.800 "nvmf_create_target", 00:04:31.800 "nvmf_subsystem_allow_any_host", 00:04:31.800 "nvmf_subsystem_set_keys", 00:04:31.800 "nvmf_subsystem_remove_host", 00:04:31.800 "nvmf_subsystem_add_host", 00:04:31.800 "nvmf_ns_remove_host", 00:04:31.800 "nvmf_ns_add_host", 00:04:31.800 "nvmf_subsystem_remove_ns", 00:04:31.800 "nvmf_subsystem_set_ns_ana_group", 00:04:31.800 "nvmf_subsystem_add_ns", 00:04:31.800 "nvmf_subsystem_listener_set_ana_state", 00:04:31.800 "nvmf_discovery_get_referrals", 00:04:31.800 "nvmf_discovery_remove_referral", 00:04:31.800 "nvmf_discovery_add_referral", 00:04:31.800 "nvmf_subsystem_remove_listener", 00:04:31.800 "nvmf_subsystem_add_listener", 00:04:31.800 "nvmf_delete_subsystem", 00:04:31.800 "nvmf_create_subsystem", 00:04:31.800 "nvmf_get_subsystems", 00:04:31.800 "env_dpdk_get_mem_stats", 00:04:31.800 "nbd_get_disks", 00:04:31.800 "nbd_stop_disk", 00:04:31.800 "nbd_start_disk", 00:04:31.800 "ublk_recover_disk", 00:04:31.800 "ublk_get_disks", 00:04:31.800 "ublk_stop_disk", 00:04:31.800 "ublk_start_disk", 00:04:31.800 "ublk_destroy_target", 00:04:31.800 "ublk_create_target", 00:04:31.800 "virtio_blk_create_transport", 00:04:31.800 "virtio_blk_get_transports", 00:04:31.800 "vhost_controller_set_coalescing", 00:04:31.800 "vhost_get_controllers", 00:04:31.800 "vhost_delete_controller", 00:04:31.800 "vhost_create_blk_controller", 00:04:31.800 "vhost_scsi_controller_remove_target", 00:04:31.800 "vhost_scsi_controller_add_target", 00:04:31.800 "vhost_start_scsi_controller", 00:04:31.800 "vhost_create_scsi_controller", 00:04:31.800 "thread_set_cpumask", 00:04:31.800 "scheduler_set_options", 00:04:31.800 "framework_get_governor", 00:04:31.800 "framework_get_scheduler", 00:04:31.800 
"framework_set_scheduler", 00:04:31.800 "framework_get_reactors", 00:04:31.800 "thread_get_io_channels", 00:04:31.800 "thread_get_pollers", 00:04:31.800 "thread_get_stats", 00:04:31.800 "framework_monitor_context_switch", 00:04:31.800 "spdk_kill_instance", 00:04:31.800 "log_enable_timestamps", 00:04:31.800 "log_get_flags", 00:04:31.800 "log_clear_flag", 00:04:31.800 "log_set_flag", 00:04:31.800 "log_get_level", 00:04:31.800 "log_set_level", 00:04:31.800 "log_get_print_level", 00:04:31.800 "log_set_print_level", 00:04:31.800 "framework_enable_cpumask_locks", 00:04:31.800 "framework_disable_cpumask_locks", 00:04:31.800 "framework_wait_init", 00:04:31.800 "framework_start_init", 00:04:31.800 "scsi_get_devices", 00:04:31.800 "bdev_get_histogram", 00:04:31.800 "bdev_enable_histogram", 00:04:31.800 "bdev_set_qos_limit", 00:04:31.800 "bdev_set_qd_sampling_period", 00:04:31.800 "bdev_get_bdevs", 00:04:31.800 "bdev_reset_iostat", 00:04:31.800 "bdev_get_iostat", 00:04:31.800 "bdev_examine", 00:04:31.800 "bdev_wait_for_examine", 00:04:31.800 "bdev_set_options", 00:04:31.800 "accel_get_stats", 00:04:31.800 "accel_set_options", 00:04:31.800 "accel_set_driver", 00:04:31.800 "accel_crypto_key_destroy", 00:04:31.800 "accel_crypto_keys_get", 00:04:31.800 "accel_crypto_key_create", 00:04:31.800 "accel_assign_opc", 00:04:31.800 "accel_get_module_info", 00:04:31.800 "accel_get_opc_assignments", 00:04:31.800 "vmd_rescan", 00:04:31.800 "vmd_remove_device", 00:04:31.800 "vmd_enable", 00:04:31.800 "sock_get_default_impl", 00:04:31.800 "sock_set_default_impl", 00:04:31.800 "sock_impl_set_options", 00:04:31.800 "sock_impl_get_options", 00:04:31.800 "iobuf_get_stats", 00:04:31.800 "iobuf_set_options", 00:04:31.800 "keyring_get_keys", 00:04:31.800 "vfu_tgt_set_base_path", 00:04:31.800 "framework_get_pci_devices", 00:04:31.800 "framework_get_config", 00:04:31.800 "framework_get_subsystems", 00:04:31.800 "fsdev_set_opts", 00:04:31.800 "fsdev_get_opts", 00:04:31.800 "trace_get_info", 
00:04:31.800 "trace_get_tpoint_group_mask", 00:04:31.800 "trace_disable_tpoint_group", 00:04:31.800 "trace_enable_tpoint_group", 00:04:31.800 "trace_clear_tpoint_mask", 00:04:31.800 "trace_set_tpoint_mask", 00:04:31.800 "notify_get_notifications", 00:04:31.800 "notify_get_types", 00:04:31.800 "spdk_get_version", 00:04:31.800 "rpc_get_methods" 00:04:31.800 ] 00:04:31.800 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.800 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.800 17:53:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1346140 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1346140 ']' 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1346140 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1346140 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1346140' 00:04:31.800 killing process with pid 1346140 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1346140 00:04:31.800 17:53:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1346140 00:04:32.366 00:04:32.366 real 0m1.331s 00:04:32.366 user 0m2.363s 00:04:32.366 sys 0m0.477s 00:04:32.366 17:53:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.366 17:53:55 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.366 ************************************ 00:04:32.366 END TEST spdkcli_tcp 00:04:32.366 ************************************ 00:04:32.366 17:53:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.366 17:53:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.366 17:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.366 17:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.366 ************************************ 00:04:32.366 START TEST dpdk_mem_utility 00:04:32.366 ************************************ 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.366 * Looking for test storage... 00:04:32.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.366 17:53:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:04:32.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.366 --rc genhtml_branch_coverage=1 00:04:32.366 --rc genhtml_function_coverage=1 00:04:32.366 --rc genhtml_legend=1 00:04:32.366 --rc geninfo_all_blocks=1 00:04:32.366 --rc geninfo_unexecuted_blocks=1 00:04:32.366 00:04:32.366 ' 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.366 --rc genhtml_branch_coverage=1 00:04:32.366 --rc genhtml_function_coverage=1 00:04:32.366 --rc genhtml_legend=1 00:04:32.366 --rc geninfo_all_blocks=1 00:04:32.366 --rc geninfo_unexecuted_blocks=1 00:04:32.366 00:04:32.366 ' 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.366 --rc genhtml_branch_coverage=1 00:04:32.366 --rc genhtml_function_coverage=1 00:04:32.366 --rc genhtml_legend=1 00:04:32.366 --rc geninfo_all_blocks=1 00:04:32.366 --rc geninfo_unexecuted_blocks=1 00:04:32.366 00:04:32.366 ' 00:04:32.366 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.366 --rc genhtml_branch_coverage=1 00:04:32.366 --rc genhtml_function_coverage=1 00:04:32.366 --rc genhtml_legend=1 00:04:32.366 --rc geninfo_all_blocks=1 00:04:32.366 --rc geninfo_unexecuted_blocks=1 00:04:32.367 00:04:32.367 ' 00:04:32.367 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:32.367 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1346357 00:04:32.367 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.367 17:53:55 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1346357 00:04:32.367 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1346357 ']' 00:04:32.367 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.367 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.367 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.367 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.367 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.625 [2024-12-09 17:53:55.427139] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:32.625 [2024-12-09 17:53:55.427218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346357 ] 00:04:32.625 [2024-12-09 17:53:55.497192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.625 [2024-12-09 17:53:55.553709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.882 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.882 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:32.882 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:32.882 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:32.882 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.882 
17:53:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.882 { 00:04:32.882 "filename": "/tmp/spdk_mem_dump.txt" 00:04:32.882 } 00:04:32.882 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.882 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:32.882 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:32.882 1 heaps totaling size 818.000000 MiB 00:04:32.882 size: 818.000000 MiB heap id: 0 00:04:32.882 end heaps---------- 00:04:32.882 9 mempools totaling size 603.782043 MiB 00:04:32.882 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:32.882 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:32.882 size: 100.555481 MiB name: bdev_io_1346357 00:04:32.882 size: 50.003479 MiB name: msgpool_1346357 00:04:32.882 size: 36.509338 MiB name: fsdev_io_1346357 00:04:32.882 size: 21.763794 MiB name: PDU_Pool 00:04:32.882 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:32.882 size: 4.133484 MiB name: evtpool_1346357 00:04:32.882 size: 0.026123 MiB name: Session_Pool 00:04:32.882 end mempools------- 00:04:32.882 6 memzones totaling size 4.142822 MiB 00:04:32.882 size: 1.000366 MiB name: RG_ring_0_1346357 00:04:32.882 size: 1.000366 MiB name: RG_ring_1_1346357 00:04:32.882 size: 1.000366 MiB name: RG_ring_4_1346357 00:04:32.882 size: 1.000366 MiB name: RG_ring_5_1346357 00:04:32.882 size: 0.125366 MiB name: RG_ring_2_1346357 00:04:32.882 size: 0.015991 MiB name: RG_ring_3_1346357 00:04:32.882 end memzones------- 00:04:32.882 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:32.882 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:32.882 list of free elements. 
size: 10.852478 MiB
00:04:32.882 element at address: 0x200019200000 with size: 0.999878 MiB
00:04:32.883 element at address: 0x200019400000 with size: 0.999878 MiB
00:04:32.883 element at address: 0x200000400000 with size: 0.998535 MiB
00:04:32.883 element at address: 0x200032000000 with size: 0.994446 MiB
00:04:32.883 element at address: 0x200006400000 with size: 0.959839 MiB
00:04:32.883 element at address: 0x200012c00000 with size: 0.944275 MiB
00:04:32.883 element at address: 0x200019600000 with size: 0.936584 MiB
00:04:32.883 element at address: 0x200000200000 with size: 0.717346 MiB
00:04:32.883 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:04:32.883 element at address: 0x200000c00000 with size: 0.495422 MiB
00:04:32.883 element at address: 0x20000a600000 with size: 0.490723 MiB
00:04:32.883 element at address: 0x200019800000 with size: 0.485657 MiB
00:04:32.883 element at address: 0x200003e00000 with size: 0.481934 MiB
00:04:32.883 element at address: 0x200028200000 with size: 0.410034 MiB
00:04:32.883 element at address: 0x200000800000 with size: 0.355042 MiB
00:04:32.883 list of standard malloc elements.
size: 199.218628 MiB
00:04:32.883 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:04:32.883 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:04:32.883 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:32.883 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:04:32.883 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:04:32.883 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:32.883 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:04:32.883 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:32.883 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:04:32.883 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000085b040 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000085f300 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000087f680 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200000cff000 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:04:32.883 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200028268f80 with size: 0.000183 MiB
00:04:32.883 element at address: 0x200028269040 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:04:32.883 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:04:32.883 list of memzone associated elements.
size: 607.928894 MiB
00:04:32.883 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:04:32.883 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:32.883 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:04:32.883 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:32.883 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:04:32.883 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1346357_0
00:04:32.883 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:32.883 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1346357_0
00:04:32.883 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:32.883 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1346357_0
00:04:32.883 element at address: 0x2000199be940 with size: 20.255554 MiB
00:04:32.883 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:32.883 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:04:32.883 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:32.883 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:32.883 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1346357_0
00:04:32.883 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:32.883 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1346357
00:04:32.883 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:32.883 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1346357
00:04:32.883 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:32.883 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:32.883 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:04:32.883 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:32.883 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:32.883 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:32.883 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:32.883 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:32.883 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:32.883 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1346357
00:04:32.883 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:32.883 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1346357
00:04:32.883 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:04:32.883 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1346357
00:04:32.883 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:04:32.883 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1346357
00:04:32.883 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:32.883 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1346357
00:04:32.883 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:32.883 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1346357
00:04:32.883 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:32.883 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:32.883 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:32.883 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:32.883 element at address: 0x20001987c540 with size: 0.250488 MiB
00:04:32.883 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:32.883 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:32.883 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1346357
00:04:32.883 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:32.883 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1346357
00:04:32.883 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:32.883 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:32.883 element at address: 0x200028269100 with size: 0.023743 MiB
00:04:32.883 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:32.883 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:32.883 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1346357
00:04:32.883 element at address: 0x20002826f240 with size: 0.002441 MiB
00:04:32.883 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:32.883 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:32.883 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1346357
00:04:32.883 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:32.883 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1346357
00:04:32.883 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:32.883 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1346357
00:04:32.883 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:04:32.883 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:33.141 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:33.141 17:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1346357
00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1346357 ']'
00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1346357
00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1346357
00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:33.141 17:53:55
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1346357' 00:04:33.141 killing process with pid 1346357 00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1346357 00:04:33.141 17:53:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1346357 00:04:33.400 00:04:33.400 real 0m1.144s 00:04:33.400 user 0m1.123s 00:04:33.400 sys 0m0.426s 00:04:33.400 17:53:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.400 17:53:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.400 ************************************ 00:04:33.400 END TEST dpdk_mem_utility 00:04:33.400 ************************************ 00:04:33.400 17:53:56 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:33.400 17:53:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.400 17:53:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.400 17:53:56 -- common/autotest_common.sh@10 -- # set +x 00:04:33.400 ************************************ 00:04:33.400 START TEST event 00:04:33.400 ************************************ 00:04:33.400 17:53:56 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:33.658 * Looking for test storage... 
00:04:33.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:33.658 17:53:56 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.658 17:53:56 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.659 17:53:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.659 17:53:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.659 17:53:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.659 17:53:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.659 17:53:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.659 17:53:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.659 17:53:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.659 17:53:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.659 17:53:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.659 17:53:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.659 17:53:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.659 17:53:56 event -- scripts/common.sh@344 -- # case "$op" in 00:04:33.659 17:53:56 event -- scripts/common.sh@345 -- # : 1 00:04:33.659 17:53:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.659 17:53:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.659 17:53:56 event -- scripts/common.sh@365 -- # decimal 1 00:04:33.659 17:53:56 event -- scripts/common.sh@353 -- # local d=1 00:04:33.659 17:53:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.659 17:53:56 event -- scripts/common.sh@355 -- # echo 1 00:04:33.659 17:53:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.659 17:53:56 event -- scripts/common.sh@366 -- # decimal 2 00:04:33.659 17:53:56 event -- scripts/common.sh@353 -- # local d=2 00:04:33.659 17:53:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.659 17:53:56 event -- scripts/common.sh@355 -- # echo 2 00:04:33.659 17:53:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.659 17:53:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.659 17:53:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.659 17:53:56 event -- scripts/common.sh@368 -- # return 0 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.659 --rc genhtml_branch_coverage=1 00:04:33.659 --rc genhtml_function_coverage=1 00:04:33.659 --rc genhtml_legend=1 00:04:33.659 --rc geninfo_all_blocks=1 00:04:33.659 --rc geninfo_unexecuted_blocks=1 00:04:33.659 00:04:33.659 ' 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.659 --rc genhtml_branch_coverage=1 00:04:33.659 --rc genhtml_function_coverage=1 00:04:33.659 --rc genhtml_legend=1 00:04:33.659 --rc geninfo_all_blocks=1 00:04:33.659 --rc geninfo_unexecuted_blocks=1 00:04:33.659 00:04:33.659 ' 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.659 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:33.659 --rc genhtml_branch_coverage=1 00:04:33.659 --rc genhtml_function_coverage=1 00:04:33.659 --rc genhtml_legend=1 00:04:33.659 --rc geninfo_all_blocks=1 00:04:33.659 --rc geninfo_unexecuted_blocks=1 00:04:33.659 00:04:33.659 ' 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.659 --rc genhtml_branch_coverage=1 00:04:33.659 --rc genhtml_function_coverage=1 00:04:33.659 --rc genhtml_legend=1 00:04:33.659 --rc geninfo_all_blocks=1 00:04:33.659 --rc geninfo_unexecuted_blocks=1 00:04:33.659 00:04:33.659 ' 00:04:33.659 17:53:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:33.659 17:53:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:33.659 17:53:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:33.659 17:53:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.659 17:53:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.659 ************************************ 00:04:33.659 START TEST event_perf 00:04:33.659 ************************************ 00:04:33.659 17:53:56 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.659 Running I/O for 1 seconds...[2024-12-09 17:53:56.614094] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:33.659 [2024-12-09 17:53:56.614156] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346668 ] 00:04:33.659 [2024-12-09 17:53:56.679670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.917 [2024-12-09 17:53:56.739824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.917 [2024-12-09 17:53:56.739887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.917 [2024-12-09 17:53:56.739952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.917 [2024-12-09 17:53:56.739955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.850 Running I/O for 1 seconds... 00:04:34.850 lcore 0: 227837 00:04:34.850 lcore 1: 227835 00:04:34.850 lcore 2: 227835 00:04:34.850 lcore 3: 227836 00:04:34.850 done. 
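The event_perf run above prints one counter per lcore for the 1-second window. As an illustrative post-processing helper (not part of the SPDK test scripts), the per-lcore counts can be totalled with awk, here fed the exact values from this run:

```shell
# Hypothetical helper: sum the per-lcore event counts printed by event_perf.
# The input lines below are copied verbatim from the run above.
printf 'lcore 0: 227837\nlcore 1: 227835\nlcore 2: 227835\nlcore 3: 227836\n' \
  | awk -F': ' '{ total += $2 } END { print total }'
# -> 911343 events across the four reactors in the 1-second window
```

The `-F': '` field separator splits each line at the colon, so `$2` is the numeric count regardless of the lcore id.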
00:04:34.850 00:04:34.850 real 0m1.205s 00:04:34.850 user 0m4.134s 00:04:34.850 sys 0m0.065s 00:04:34.850 17:53:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.850 17:53:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.850 ************************************ 00:04:34.850 END TEST event_perf 00:04:34.850 ************************************ 00:04:34.850 17:53:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:34.850 17:53:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:34.850 17:53:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.850 17:53:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.850 ************************************ 00:04:34.850 START TEST event_reactor 00:04:34.850 ************************************ 00:04:34.850 17:53:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:34.850 [2024-12-09 17:53:57.865651] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:34.850 [2024-12-09 17:53:57.865709] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346825 ]
00:04:35.108 [2024-12-09 17:53:57.931528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:35.108 [2024-12-09 17:53:57.986698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:36.042 test_start
00:04:36.042 oneshot
00:04:36.042 tick 100
00:04:36.042 tick 100
00:04:36.042 tick 250
00:04:36.042 tick 100
00:04:36.042 tick 100
00:04:36.042 tick 100
00:04:36.042 tick 250
00:04:36.042 tick 500
00:04:36.042 tick 100
00:04:36.042 tick 100
00:04:36.042 tick 250
00:04:36.042 tick 100
00:04:36.042 tick 100
00:04:36.042 test_end
00:04:36.042
00:04:36.042 real 0m1.197s
00:04:36.042 user 0m1.126s
00:04:36.042 sys 0m0.067s
00:04:36.042 17:53:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.042 17:53:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:36.042 ************************************
00:04:36.042 END TEST event_reactor
00:04:36.042 ************************************
00:04:36.042 17:53:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:36.042 17:53:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:36.042 17:53:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.042 17:53:59 event -- common/autotest_common.sh@10 -- # set +x
00:04:36.300 ************************************
00:04:36.300 START TEST event_reactor_perf
00:04:36.300 ************************************
00:04:36.300 17:53:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf
-t 1 00:04:36.300 [2024-12-09 17:53:59.117185] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:36.300 [2024-12-09 17:53:59.117252] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346977 ] 00:04:36.300 [2024-12-09 17:53:59.180793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.300 [2024-12-09 17:53:59.235091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.673 test_start 00:04:37.673 test_end 00:04:37.673 Performance: 442428 events per second 00:04:37.673 00:04:37.673 real 0m1.197s 00:04:37.673 user 0m1.135s 00:04:37.673 sys 0m0.057s 00:04:37.673 17:54:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.673 17:54:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.673 ************************************ 00:04:37.673 END TEST event_reactor_perf 00:04:37.673 ************************************ 00:04:37.673 17:54:00 event -- event/event.sh@49 -- # uname -s 00:04:37.673 17:54:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:37.673 17:54:00 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:37.673 17:54:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.673 17:54:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.673 17:54:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.673 ************************************ 00:04:37.673 START TEST event_scheduler 00:04:37.673 ************************************ 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:37.673 * Looking for test storage... 00:04:37.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.673 17:54:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.673 --rc genhtml_branch_coverage=1 00:04:37.673 --rc genhtml_function_coverage=1 00:04:37.673 --rc genhtml_legend=1 00:04:37.673 --rc geninfo_all_blocks=1 00:04:37.673 --rc geninfo_unexecuted_blocks=1 00:04:37.673 00:04:37.673 ' 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.673 --rc genhtml_branch_coverage=1 00:04:37.673 --rc genhtml_function_coverage=1 00:04:37.673 --rc 
genhtml_legend=1 00:04:37.673 --rc geninfo_all_blocks=1 00:04:37.673 --rc geninfo_unexecuted_blocks=1 00:04:37.673 00:04:37.673 ' 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.673 --rc genhtml_branch_coverage=1 00:04:37.673 --rc genhtml_function_coverage=1 00:04:37.673 --rc genhtml_legend=1 00:04:37.673 --rc geninfo_all_blocks=1 00:04:37.673 --rc geninfo_unexecuted_blocks=1 00:04:37.673 00:04:37.673 ' 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.673 --rc genhtml_branch_coverage=1 00:04:37.673 --rc genhtml_function_coverage=1 00:04:37.673 --rc genhtml_legend=1 00:04:37.673 --rc geninfo_all_blocks=1 00:04:37.673 --rc geninfo_unexecuted_blocks=1 00:04:37.673 00:04:37.673 ' 00:04:37.673 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:37.673 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1347212 00:04:37.673 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:37.673 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.673 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1347212 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1347212 ']' 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.673 17:54:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.673 [2024-12-09 17:54:00.536145] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:04:37.673 [2024-12-09 17:54:00.536242] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347212 ] 00:04:37.673 [2024-12-09 17:54:00.604649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.673 [2024-12-09 17:54:00.670491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.673 [2024-12-09 17:54:00.670550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.673 [2024-12-09 17:54:00.670615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.673 [2024-12-09 17:54:00.670620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:37.932 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 [2024-12-09 17:54:00.788541] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:37.932 [2024-12-09 17:54:00.788594] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:37.932 [2024-12-09 17:54:00.788630] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:37.932 [2024-12-09 17:54:00.788656] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:37.932 [2024-12-09 17:54:00.788672] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 [2024-12-09 17:54:00.887109] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 ************************************ 00:04:37.932 START TEST scheduler_create_thread 00:04:37.932 ************************************ 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 2 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 3 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 4 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 5 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 6 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.932 7 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.932 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.190 8 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.190 17:54:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.190 9 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.190 10 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.190 17:54:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.190 17:54:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.190 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.754 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.754 00:04:38.754 real 0m0.591s 00:04:38.754 user 0m0.007s 00:04:38.754 sys 0m0.007s 00:04:38.754 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.754 17:54:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.754 ************************************ 00:04:38.754 END TEST scheduler_create_thread 00:04:38.754 ************************************ 00:04:38.754 17:54:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:38.754 17:54:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1347212 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1347212 ']' 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1347212 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1347212 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1347212' 00:04:38.754 killing process with pid 1347212 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1347212 00:04:38.754 17:54:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1347212 00:04:39.012 [2024-12-09 17:54:01.983272] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:39.269 00:04:39.269 real 0m1.854s 00:04:39.269 user 0m2.562s 00:04:39.269 sys 0m0.351s 00:04:39.269 17:54:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.269 17:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.269 ************************************ 00:04:39.269 END TEST event_scheduler 00:04:39.269 ************************************ 00:04:39.269 17:54:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:39.269 17:54:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:39.269 17:54:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.269 17:54:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.269 17:54:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.269 ************************************ 00:04:39.269 START TEST app_repeat 00:04:39.269 ************************************ 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1347598 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1347598' 00:04:39.269 Process app_repeat pid: 1347598 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:39.269 spdk_app_start Round 0 00:04:39.269 17:54:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1347598 /var/tmp/spdk-nbd.sock 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1347598 ']' 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.269 17:54:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.269 [2024-12-09 17:54:02.282186] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:39.269 [2024-12-09 17:54:02.282246] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347598 ] 00:04:39.527 [2024-12-09 17:54:02.350522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.527 [2024-12-09 17:54:02.413106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.527 [2024-12-09 17:54:02.413111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.527 17:54:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.527 17:54:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:39.527 17:54:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.785 Malloc0 00:04:40.042 17:54:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.300 Malloc1 00:04:40.300 17:54:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.300 
17:54:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.300 17:54:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.558 /dev/nbd0 00:04:40.558 17:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.558 17:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:40.558 1+0 records in 00:04:40.558 1+0 records out 00:04:40.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185928 s, 22.0 MB/s 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.558 17:54:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.558 17:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.558 17:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.558 17:54:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.816 /dev/nbd1 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.816 17:54:03 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.816 1+0 records in 00:04:40.816 1+0 records out 00:04:40.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213295 s, 19.2 MB/s 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.816 17:54:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.816 17:54:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.074 { 00:04:41.074 "nbd_device": "/dev/nbd0", 00:04:41.074 "bdev_name": "Malloc0" 00:04:41.074 }, 00:04:41.074 { 00:04:41.074 "nbd_device": "/dev/nbd1", 00:04:41.074 "bdev_name": "Malloc1" 00:04:41.074 } 00:04:41.074 ]' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.074 { 00:04:41.074 "nbd_device": "/dev/nbd0", 00:04:41.074 "bdev_name": "Malloc0" 00:04:41.074 
}, 00:04:41.074 { 00:04:41.074 "nbd_device": "/dev/nbd1", 00:04:41.074 "bdev_name": "Malloc1" 00:04:41.074 } 00:04:41.074 ]' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.074 /dev/nbd1' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.074 /dev/nbd1' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.074 256+0 records in 00:04:41.074 256+0 records out 00:04:41.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515124 s, 204 MB/s 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.074 17:54:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.332 256+0 records in 00:04:41.332 256+0 records out 00:04:41.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202508 s, 51.8 MB/s 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.332 256+0 records in 00:04:41.332 256+0 records out 00:04:41.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217299 s, 48.3 MB/s 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.332 17:54:04 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.332 17:54:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.590 17:54:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.847 17:54:04 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.847 17:54:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.105 17:54:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.105 17:54:05 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.671 17:54:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.671 [2024-12-09 17:54:05.615917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.671 [2024-12-09 17:54:05.669961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.671 [2024-12-09 17:54:05.669961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.929 [2024-12-09 17:54:05.723576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.929 [2024-12-09 17:54:05.723635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.454 17:54:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.454 17:54:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.454 spdk_app_start Round 1 00:04:45.454 17:54:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1347598 /var/tmp/spdk-nbd.sock 00:04:45.454 17:54:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1347598 ']' 00:04:45.454 17:54:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.454 17:54:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.455 17:54:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:45.455 17:54:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.455 17:54:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.712 17:54:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.712 17:54:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:45.712 17:54:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.985 Malloc0 00:04:45.985 17:54:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.554 Malloc1 00:04:46.554 17:54:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.554 17:54:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.554 /dev/nbd0 00:04:46.854 17:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.854 17:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.854 1+0 records in 00:04:46.854 1+0 records out 00:04:46.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158254 s, 25.9 MB/s 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.854 17:54:09 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.854 17:54:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.854 17:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.854 17:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.854 17:54:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.138 /dev/nbd1 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.138 1+0 records in 00:04:47.138 1+0 records out 00:04:47.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210484 s, 19.5 MB/s 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.138 17:54:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.138 17:54:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.397 { 00:04:47.397 "nbd_device": "/dev/nbd0", 00:04:47.397 "bdev_name": "Malloc0" 00:04:47.397 }, 00:04:47.397 { 00:04:47.397 "nbd_device": "/dev/nbd1", 00:04:47.397 "bdev_name": "Malloc1" 00:04:47.397 } 00:04:47.397 ]' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.397 { 00:04:47.397 "nbd_device": "/dev/nbd0", 00:04:47.397 "bdev_name": "Malloc0" 00:04:47.397 }, 00:04:47.397 { 00:04:47.397 "nbd_device": "/dev/nbd1", 00:04:47.397 "bdev_name": "Malloc1" 00:04:47.397 } 00:04:47.397 ]' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.397 /dev/nbd1' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.397 /dev/nbd1' 00:04:47.397 
17:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.397 256+0 records in 00:04:47.397 256+0 records out 00:04:47.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513832 s, 204 MB/s 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.397 256+0 records in 00:04:47.397 256+0 records out 00:04:47.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211004 s, 49.7 MB/s 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.397 256+0 records in 00:04:47.397 256+0 records out 00:04:47.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022295 s, 47.0 MB/s 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.397 17:54:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.656 17:54:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.914 17:54:10 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.914 17:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.172 17:54:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.172 17:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.172 17:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.429 17:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.430 17:54:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.430 17:54:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.687 17:54:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.945 [2024-12-09 17:54:11.754514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.945 [2024-12-09 17:54:11.807834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.945 [2024-12-09 17:54:11.807834] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.945 [2024-12-09 17:54:11.866994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.945 [2024-12-09 17:54:11.867065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.224 17:54:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.224 17:54:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:52.224 spdk_app_start Round 2 00:04:52.224 17:54:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1347598 /var/tmp/spdk-nbd.sock 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1347598 ']' 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.224 17:54:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.224 17:54:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.224 Malloc0 00:04:52.224 17:54:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.481 Malloc1 00:04:52.481 17:54:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.481 17:54:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.738 /dev/nbd0 00:04:52.738 17:54:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.738 17:54:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.738 1+0 records in 00:04:52.738 1+0 records out 00:04:52.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212756 s, 19.3 MB/s 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.738 17:54:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.738 17:54:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.738 17:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.738 17:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.738 17:54:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.996 /dev/nbd1 00:04:52.996 17:54:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.996 17:54:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.996 1+0 records in 00:04:52.996 1+0 records out 00:04:52.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274431 s, 14.9 MB/s 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.996 17:54:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.253 17:54:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.253 17:54:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.253 17:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.253 17:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.253 17:54:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.253 17:54:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.253 17:54:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.511 { 00:04:53.511 "nbd_device": "/dev/nbd0", 00:04:53.511 "bdev_name": "Malloc0" 00:04:53.511 }, 00:04:53.511 { 00:04:53.511 "nbd_device": "/dev/nbd1", 00:04:53.511 "bdev_name": "Malloc1" 00:04:53.511 } 00:04:53.511 ]' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.511 { 00:04:53.511 "nbd_device": "/dev/nbd0", 00:04:53.511 "bdev_name": "Malloc0" 00:04:53.511 }, 00:04:53.511 { 00:04:53.511 "nbd_device": "/dev/nbd1", 00:04:53.511 "bdev_name": "Malloc1" 00:04:53.511 } 00:04:53.511 ]' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.511 /dev/nbd1' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.511 /dev/nbd1' 00:04:53.511 
17:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.511 256+0 records in 00:04:53.511 256+0 records out 00:04:53.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387076 s, 271 MB/s 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.511 256+0 records in 00:04:53.511 256+0 records out 00:04:53.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198804 s, 52.7 MB/s 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.511 256+0 records in 00:04:53.511 256+0 records out 00:04:53.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221482 s, 47.3 MB/s 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.511 17:54:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.512 17:54:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.770 17:54:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.027 17:54:17 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.027 17:54:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.028 17:54:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.028 17:54:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.285 17:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.286 17:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.286 17:54:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.286 17:54:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.286 17:54:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.286 17:54:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.286 17:54:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.854 17:54:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.854 [2024-12-09 17:54:17.826755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.854 [2024-12-09 17:54:17.881501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.854 [2024-12-09 17:54:17.881504] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.111 [2024-12-09 17:54:17.937066] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.111 [2024-12-09 17:54:17.937134] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.635 17:54:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1347598 /var/tmp/spdk-nbd.sock 00:04:57.635 17:54:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1347598 ']' 00:04:57.635 17:54:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.635 17:54:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.635 17:54:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:57.635 17:54:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.635 17:54:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.893 17:54:20 event.app_repeat -- event/event.sh@39 -- # killprocess 1347598 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1347598 ']' 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1347598 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1347598 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1347598' 00:04:57.893 killing process with pid 1347598 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1347598 00:04:57.893 17:54:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1347598 00:04:58.153 spdk_app_start is called in Round 0. 00:04:58.153 Shutdown signal received, stop current app iteration 00:04:58.153 Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 reinitialization... 00:04:58.153 spdk_app_start is called in Round 1. 00:04:58.153 Shutdown signal received, stop current app iteration 00:04:58.153 Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 reinitialization... 00:04:58.153 spdk_app_start is called in Round 2. 
00:04:58.153 Shutdown signal received, stop current app iteration 00:04:58.153 Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 reinitialization... 00:04:58.153 spdk_app_start is called in Round 3. 00:04:58.153 Shutdown signal received, stop current app iteration 00:04:58.153 17:54:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:58.153 17:54:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:58.153 00:04:58.153 real 0m18.852s 00:04:58.153 user 0m41.702s 00:04:58.153 sys 0m3.278s 00:04:58.153 17:54:21 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.153 17:54:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.153 ************************************ 00:04:58.153 END TEST app_repeat 00:04:58.153 ************************************ 00:04:58.153 17:54:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:58.153 17:54:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:58.153 17:54:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.153 17:54:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.153 17:54:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.153 ************************************ 00:04:58.153 START TEST cpu_locks 00:04:58.153 ************************************ 00:04:58.153 17:54:21 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:58.412 * Looking for test storage... 
00:04:58.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.412 17:54:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.412 --rc genhtml_branch_coverage=1 00:04:58.412 --rc genhtml_function_coverage=1 00:04:58.412 --rc genhtml_legend=1 00:04:58.412 --rc geninfo_all_blocks=1 00:04:58.412 --rc geninfo_unexecuted_blocks=1 00:04:58.412 00:04:58.412 ' 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.412 --rc genhtml_branch_coverage=1 00:04:58.412 --rc genhtml_function_coverage=1 00:04:58.412 --rc genhtml_legend=1 00:04:58.412 --rc geninfo_all_blocks=1 00:04:58.412 --rc geninfo_unexecuted_blocks=1 
00:04:58.412 00:04:58.412 ' 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.412 --rc genhtml_branch_coverage=1 00:04:58.412 --rc genhtml_function_coverage=1 00:04:58.412 --rc genhtml_legend=1 00:04:58.412 --rc geninfo_all_blocks=1 00:04:58.412 --rc geninfo_unexecuted_blocks=1 00:04:58.412 00:04:58.412 ' 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.412 --rc genhtml_branch_coverage=1 00:04:58.412 --rc genhtml_function_coverage=1 00:04:58.412 --rc genhtml_legend=1 00:04:58.412 --rc geninfo_all_blocks=1 00:04:58.412 --rc geninfo_unexecuted_blocks=1 00:04:58.412 00:04:58.412 ' 00:04:58.412 17:54:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:58.412 17:54:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:58.412 17:54:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:58.412 17:54:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.412 17:54:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.412 ************************************ 00:04:58.412 START TEST default_locks 00:04:58.412 ************************************ 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1350590 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1350590 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1350590 ']' 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.412 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.412 [2024-12-09 17:54:21.385404] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:58.412 [2024-12-09 17:54:21.385487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350590 ] 00:04:58.671 [2024-12-09 17:54:21.453435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.671 [2024-12-09 17:54:21.511043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.929 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.929 17:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:58.929 17:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1350590 00:04:58.929 17:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1350590 00:04:58.929 17:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.187 lslocks: write error 00:04:59.187 17:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1350590 00:04:59.187 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1350590 ']' 00:04:59.187 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1350590 00:04:59.187 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1350590 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1350590' 00:04:59.188 killing process with pid 1350590 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1350590 00:04:59.188 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1350590 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1350590 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1350590 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1350590 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1350590 ']' 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1350590) - No such process 00:04:59.754 ERROR: process (pid: 1350590) is no longer running 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.754 00:04:59.754 real 0m1.283s 00:04:59.754 user 0m1.253s 00:04:59.754 sys 0m0.543s 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.754 17:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.754 ************************************ 00:04:59.754 END TEST default_locks 00:04:59.754 ************************************ 00:04:59.754 17:54:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.754 17:54:22 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.754 17:54:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.754 17:54:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.754 ************************************ 00:04:59.754 START TEST default_locks_via_rpc 00:04:59.754 ************************************ 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1350762 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1350762 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1350762 ']' 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.754 17:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.754 [2024-12-09 17:54:22.719894] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:04:59.754 [2024-12-09 17:54:22.719992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350762 ] 00:04:59.754 [2024-12-09 17:54:22.786391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.013 [2024-12-09 17:54:22.847084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.271 17:54:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1350762 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1350762 00:05:00.271 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1350762 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1350762 ']' 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1350762 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1350762 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1350762' 00:05:00.529 killing process with pid 1350762 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1350762 00:05:00.529 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1350762 00:05:01.096 00:05:01.096 real 0m1.177s 00:05:01.096 user 0m1.145s 00:05:01.096 sys 0m0.501s 00:05:01.096 17:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.096 17:54:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.096 ************************************ 00:05:01.096 END TEST default_locks_via_rpc 00:05:01.096 ************************************ 00:05:01.096 17:54:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.096 17:54:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.096 17:54:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.096 17:54:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.096 ************************************ 00:05:01.096 START TEST non_locking_app_on_locked_coremask 00:05:01.096 ************************************ 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1350924 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1350924 /var/tmp/spdk.sock 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1350924 ']' 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.096 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.097 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:01.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.097 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.097 17:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.097 [2024-12-09 17:54:23.944505] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:01.097 [2024-12-09 17:54:23.944607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350924 ] 00:05:01.097 [2024-12-09 17:54:24.005804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.097 [2024-12-09 17:54:24.059439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1350928 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1350928 /var/tmp/spdk2.sock 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1350928 ']' 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.355 17:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.355 [2024-12-09 17:54:24.366477] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:01.355 [2024-12-09 17:54:24.366573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350928 ] 00:05:01.613 [2024-12-09 17:54:24.467686] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:01.613 [2024-12-09 17:54:24.467713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.613 [2024-12-09 17:54:24.581212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.548 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.548 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.548 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1350924 00:05:02.548 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1350924 00:05:02.548 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.806 lslocks: write error 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1350924 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1350924 ']' 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1350924 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1350924 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1350924' 00:05:02.806 killing process with pid 1350924 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1350924 00:05:02.806 17:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1350924 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1350928 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1350928 ']' 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1350928 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1350928 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1350928' 00:05:03.740 killing process with pid 1350928 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1350928 00:05:03.740 17:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1350928 00:05:04.306 00:05:04.306 real 0m3.174s 00:05:04.306 user 0m3.406s 00:05:04.306 sys 0m1.024s 00:05:04.306 17:54:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.306 17:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.306 ************************************ 00:05:04.306 END TEST non_locking_app_on_locked_coremask 00:05:04.306 ************************************ 00:05:04.306 17:54:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:04.306 17:54:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.306 17:54:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.306 17:54:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.306 ************************************ 00:05:04.306 START TEST locking_app_on_unlocked_coremask 00:05:04.306 ************************************ 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1351354 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1351354 /var/tmp/spdk.sock 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1351354 ']' 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.306 17:54:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.306 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.306 [2024-12-09 17:54:27.172080] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:04.306 [2024-12-09 17:54:27.172188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351354 ] 00:05:04.306 [2024-12-09 17:54:27.238274] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:04.306 [2024-12-09 17:54:27.238316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.306 [2024-12-09 17:54:27.297872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1351362 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1351362 /var/tmp/spdk2.sock 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1351362 ']' 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.565 17:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.823 [2024-12-09 17:54:27.619961] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:05:04.823 [2024-12-09 17:54:27.620051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351362 ] 00:05:04.823 [2024-12-09 17:54:27.717433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.823 [2024-12-09 17:54:27.829491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.757 17:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.757 17:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.757 17:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1351362 00:05:05.757 17:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1351362 00:05:05.757 17:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.323 lslocks: write error 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1351354 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1351354 ']' 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1351354 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1351354 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1351354' 00:05:06.323 killing process with pid 1351354 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1351354 00:05:06.323 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1351354 00:05:07.257 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1351362 00:05:07.257 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1351362 ']' 00:05:07.257 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1351362 00:05:07.257 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.257 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.257 17:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1351362 00:05:07.257 17:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.257 17:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.257 17:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1351362' 00:05:07.257 killing process with pid 1351362 00:05:07.257 17:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1351362 00:05:07.257 17:54:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1351362 00:05:07.518 00:05:07.518 real 0m3.318s 00:05:07.518 user 0m3.550s 00:05:07.518 sys 0m1.070s 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 END TEST locking_app_on_unlocked_coremask 00:05:07.518 ************************************ 00:05:07.518 17:54:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.518 17:54:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.518 17:54:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.518 17:54:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 START TEST locking_app_on_locked_coremask 00:05:07.518 ************************************ 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1351788 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1351788 /var/tmp/spdk.sock 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1351788 ']' 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.518 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 [2024-12-09 17:54:30.541841] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:07.518 [2024-12-09 17:54:30.541930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351788 ] 00:05:07.777 [2024-12-09 17:54:30.607671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.777 [2024-12-09 17:54:30.667231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1351804 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1351804 /var/tmp/spdk2.sock 
00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1351804 /var/tmp/spdk2.sock 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.040 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1351804 /var/tmp/spdk2.sock 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1351804 ']' 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.041 17:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.041 [2024-12-09 17:54:30.997014] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:08.041 [2024-12-09 17:54:30.997103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351804 ] 00:05:08.299 [2024-12-09 17:54:31.092567] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1351788 has claimed it. 00:05:08.299 [2024-12-09 17:54:31.092634] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1351804) - No such process 00:05:08.863 ERROR: process (pid: 1351804) is no longer running 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1351788 00:05:08.863 17:54:31 
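The failure captured above is SPDK's per-core lock protection working as intended: a second spdk_tgt asked for core 0 while pid 1351788 still held its lock, so the new process logged "Cannot create lock on core 0" and exited. A minimal standalone sketch of an atomic claim-or-fail on a lock file, using the `flock` utility rather than SPDK's actual locking code (the lock path, function name, and messages here are illustrative assumptions, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch: claim a per-core lock file, failing fast if another process holds it.
set -euo pipefail

lockfile="${1:-/tmp/demo_cpu_lock_000}"   # illustrative path, not SPDK's

claim_core() {
    local path=$1
    exec 9>"$path"            # open (or create) the lock file on fd 9
    if ! flock -n 9; then     # non-blocking exclusive lock; fails if held
        echo "Cannot create lock on $path, another process has claimed it" >&2
        return 1
    fi
    echo $$ >&9               # record our pid as a claim marker
}

claim_core "$lockfile" && echo "claimed $lockfile by pid $$"
```

The lock is tied to the open file description, so it is released automatically when the holder exits, which is why killing the first target in the log frees the core for later tests.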
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1351788 00:05:08.863 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.122 lslocks: write error 00:05:09.122 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1351788 00:05:09.122 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1351788 ']' 00:05:09.122 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1351788 00:05:09.122 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.122 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.122 17:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1351788 00:05:09.122 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.122 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.122 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1351788' 00:05:09.122 killing process with pid 1351788 00:05:09.122 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1351788 00:05:09.122 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1351788 00:05:09.688 00:05:09.688 real 0m1.938s 00:05:09.688 user 0m2.142s 00:05:09.688 sys 0m0.621s 00:05:09.688 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.688 17:54:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.688 ************************************ 00:05:09.688 END TEST locking_app_on_locked_coremask 00:05:09.688 ************************************ 00:05:09.688 17:54:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.688 17:54:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.688 17:54:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.688 17:54:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.688 ************************************ 00:05:09.688 START TEST locking_overlapped_coremask 00:05:09.688 ************************************ 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1352086 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1352086 /var/tmp/spdk.sock 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1352086 ']' 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.688 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.688 [2024-12-09 17:54:32.529574] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:09.688 [2024-12-09 17:54:32.529682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352086 ] 00:05:09.688 [2024-12-09 17:54:32.597143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.688 [2024-12-09 17:54:32.660059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.688 [2024-12-09 17:54:32.660124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.688 [2024-12-09 17:54:32.660128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1352099 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1352099 /var/tmp/spdk2.sock 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1352099 /var/tmp/spdk2.sock 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1352099 /var/tmp/spdk2.sock 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1352099 ']' 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.947 17:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.205 [2024-12-09 17:54:32.998415] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:05:10.205 [2024-12-09 17:54:32.998504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352099 ] 00:05:10.205 [2024-12-09 17:54:33.104234] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1352086 has claimed it. 00:05:10.205 [2024-12-09 17:54:33.104297] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1352099) - No such process 00:05:10.772 ERROR: process (pid: 1352099) is no longer running 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.772 17:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1352086 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1352086 ']' 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1352086 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1352086 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1352086' 00:05:10.773 killing process with pid 1352086 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1352086 00:05:10.773 17:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1352086 00:05:11.339 00:05:11.339 real 0m1.695s 00:05:11.339 user 0m4.728s 00:05:11.339 sys 0m0.463s 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.339 
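The check_remaining_locks step above globs the actual lock files and compares them against a brace-expanded expected list (`/var/tmp/spdk_cpu_lock_{000..002}` for the 0x7 mask). A minimal standalone sketch of the same array comparison, using a temporary directory and an illustrative file prefix instead of SPDK's real `/var/tmp` paths:

```shell
#!/usr/bin/env bash
# Sketch: verify that exactly the expected per-core lock files exist.
set -euo pipefail

tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

# Simulate a target that claimed cores 0-2: one lock file per core.
for core in 000 001 002; do
    : > "$tmpdir/cpu_lock_$core"
done

# Glob the actual files, brace-expand the expected set, and compare the
# two arrays as whitespace-joined strings (the same trick as the log's
# locks / locks_expected comparison).
locks=("$tmpdir"/cpu_lock_*)
locks_expected=("$tmpdir"/cpu_lock_{000..002})

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match"
else
    echo "unexpected locks: ${locks[*]}" >&2
    exit 1
fi
```

This works because the glob expands in sorted order, matching the ascending order that brace expansion produces, so a leftover or missing lock file makes the joined strings differ.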
************************************ 00:05:11.339 END TEST locking_overlapped_coremask 00:05:11.339 ************************************ 00:05:11.339 17:54:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.339 17:54:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.339 17:54:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.339 17:54:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.339 ************************************ 00:05:11.339 START TEST locking_overlapped_coremask_via_rpc 00:05:11.339 ************************************ 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1352266 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1352266 /var/tmp/spdk.sock 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1352266 ']' 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:11.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.339 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.339 [2024-12-09 17:54:34.275721] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:11.339 [2024-12-09 17:54:34.275810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352266 ] 00:05:11.339 [2024-12-09 17:54:34.341078] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:11.339 [2024-12-09 17:54:34.341109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.597 [2024-12-09 17:54:34.396846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.597 [2024-12-09 17:54:34.396911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.597 [2024-12-09 17:54:34.396915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1352390 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 1352390 /var/tmp/spdk2.sock 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1352390 ']' 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.855 17:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.855 [2024-12-09 17:54:34.718099] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:11.855 [2024-12-09 17:54:34.718189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352390 ] 00:05:11.855 [2024-12-09 17:54:34.822085] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.855 [2024-12-09 17:54:34.822121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.113 [2024-12-09 17:54:34.943318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.113 [2024-12-09 17:54:34.943380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.113 [2024-12-09 17:54:34.943382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.679 17:54:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.679 [2024-12-09 17:54:35.703645] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1352266 has claimed it. 00:05:12.679 request: 00:05:12.679 { 00:05:12.679 "method": "framework_enable_cpumask_locks", 00:05:12.679 "req_id": 1 00:05:12.679 } 00:05:12.679 Got JSON-RPC error response 00:05:12.679 response: 00:05:12.679 { 00:05:12.679 "code": -32603, 00:05:12.679 "message": "Failed to claim CPU core: 2" 00:05:12.679 } 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1352266 /var/tmp/spdk.sock 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1352266 ']' 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.679 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1352390 /var/tmp/spdk2.sock 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1352390 ']' 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.269 17:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.269 00:05:13.269 real 0m2.032s 00:05:13.269 user 0m1.093s 00:05:13.269 sys 0m0.185s 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.269 17:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 ************************************ 00:05:13.269 END TEST locking_overlapped_coremask_via_rpc 00:05:13.269 ************************************ 00:05:13.269 17:54:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:13.269 17:54:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1352266 ]] 00:05:13.269 17:54:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1352266 00:05:13.269 17:54:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1352266 ']' 00:05:13.269 17:54:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1352266 00:05:13.269 17:54:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:13.269 17:54:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.269 17:54:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1352266 00:05:13.527 17:54:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.527 17:54:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.527 17:54:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1352266' 00:05:13.527 killing process with pid 1352266 00:05:13.527 17:54:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1352266 00:05:13.527 17:54:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1352266 00:05:13.784 17:54:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1352390 ]] 00:05:13.784 17:54:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1352390 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1352390 ']' 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1352390 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1352390 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1352390' 00:05:13.784 killing process with pid 1352390 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1352390 00:05:13.784 17:54:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1352390 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1352266 ]] 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1352266 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1352266 ']' 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1352266 00:05:14.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1352266) - No such process 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1352266 is not found' 00:05:14.350 Process with pid 1352266 is not found 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1352390 ]] 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1352390 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1352390 ']' 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1352390 00:05:14.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1352390) - No such process 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1352390 is not found' 00:05:14.350 Process with pid 1352390 is not found 00:05:14.350 17:54:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.350 00:05:14.350 real 0m16.065s 00:05:14.350 user 0m29.052s 00:05:14.350 sys 0m5.330s 00:05:14.350 17:54:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.350 
17:54:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.350 ************************************ 00:05:14.350 END TEST cpu_locks 00:05:14.350 ************************************ 00:05:14.350 00:05:14.350 real 0m40.829s 00:05:14.350 user 1m19.929s 00:05:14.350 sys 0m9.417s 00:05:14.350 17:54:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.350 17:54:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.350 ************************************ 00:05:14.350 END TEST event 00:05:14.350 ************************************ 00:05:14.350 17:54:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.350 17:54:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.350 17:54:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.350 17:54:37 -- common/autotest_common.sh@10 -- # set +x 00:05:14.350 ************************************ 00:05:14.350 START TEST thread 00:05:14.350 ************************************ 00:05:14.350 17:54:37 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.350 * Looking for test storage... 
00:05:14.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:14.350 17:54:37 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.350 17:54:37 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.350 17:54:37 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.608 17:54:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.608 17:54:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.608 17:54:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.608 17:54:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.608 17:54:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.608 17:54:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.608 17:54:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.608 17:54:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.608 17:54:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.608 17:54:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.608 17:54:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.608 17:54:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:14.608 17:54:37 thread -- scripts/common.sh@345 -- # : 1 00:05:14.608 17:54:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.608 17:54:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.608 17:54:37 thread -- scripts/common.sh@365 -- # decimal 1 00:05:14.608 17:54:37 thread -- scripts/common.sh@353 -- # local d=1 00:05:14.608 17:54:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.608 17:54:37 thread -- scripts/common.sh@355 -- # echo 1 00:05:14.608 17:54:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.608 17:54:37 thread -- scripts/common.sh@366 -- # decimal 2 00:05:14.608 17:54:37 thread -- scripts/common.sh@353 -- # local d=2 00:05:14.608 17:54:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.608 17:54:37 thread -- scripts/common.sh@355 -- # echo 2 00:05:14.608 17:54:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.608 17:54:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.608 17:54:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.608 17:54:37 thread -- scripts/common.sh@368 -- # return 0 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.608 --rc genhtml_branch_coverage=1 00:05:14.608 --rc genhtml_function_coverage=1 00:05:14.608 --rc genhtml_legend=1 00:05:14.608 --rc geninfo_all_blocks=1 00:05:14.608 --rc geninfo_unexecuted_blocks=1 00:05:14.608 00:05:14.608 ' 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.608 --rc genhtml_branch_coverage=1 00:05:14.608 --rc genhtml_function_coverage=1 00:05:14.608 --rc genhtml_legend=1 00:05:14.608 --rc geninfo_all_blocks=1 00:05:14.608 --rc geninfo_unexecuted_blocks=1 00:05:14.608 00:05:14.608 ' 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.608 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.608 --rc genhtml_branch_coverage=1 00:05:14.608 --rc genhtml_function_coverage=1 00:05:14.608 --rc genhtml_legend=1 00:05:14.608 --rc geninfo_all_blocks=1 00:05:14.608 --rc geninfo_unexecuted_blocks=1 00:05:14.608 00:05:14.608 ' 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.608 --rc genhtml_branch_coverage=1 00:05:14.608 --rc genhtml_function_coverage=1 00:05:14.608 --rc genhtml_legend=1 00:05:14.608 --rc geninfo_all_blocks=1 00:05:14.608 --rc geninfo_unexecuted_blocks=1 00:05:14.608 00:05:14.608 ' 00:05:14.608 17:54:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.608 17:54:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.608 ************************************ 00:05:14.608 START TEST thread_poller_perf 00:05:14.608 ************************************ 00:05:14.608 17:54:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.608 [2024-12-09 17:54:37.482049] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:05:14.608 [2024-12-09 17:54:37.482116] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352772 ] 00:05:14.608 [2024-12-09 17:54:37.557127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.608 [2024-12-09 17:54:37.617211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.608 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:15.981 [2024-12-09T16:54:39.022Z] ====================================== 00:05:15.981 [2024-12-09T16:54:39.022Z] busy:2711889543 (cyc) 00:05:15.981 [2024-12-09T16:54:39.022Z] total_run_count: 367000 00:05:15.981 [2024-12-09T16:54:39.022Z] tsc_hz: 2700000000 (cyc) 00:05:15.981 [2024-12-09T16:54:39.022Z] ====================================== 00:05:15.981 [2024-12-09T16:54:39.022Z] poller_cost: 7389 (cyc), 2736 (nsec) 00:05:15.981 00:05:15.981 real 0m1.216s 00:05:15.981 user 0m1.137s 00:05:15.981 sys 0m0.068s 00:05:15.981 17:54:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.981 17:54:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.981 ************************************ 00:05:15.981 END TEST thread_poller_perf 00:05:15.981 ************************************ 00:05:15.981 17:54:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.981 17:54:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:15.981 17:54:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.981 17:54:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.981 ************************************ 00:05:15.981 START TEST thread_poller_perf 00:05:15.981 
************************************ 00:05:15.981 17:54:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.981 [2024-12-09 17:54:38.753403] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:15.981 [2024-12-09 17:54:38.753472] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352928 ] 00:05:15.981 [2024-12-09 17:54:38.821715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.981 [2024-12-09 17:54:38.875597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.981 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:16.915 [2024-12-09T16:54:39.956Z] ====================================== 00:05:16.915 [2024-12-09T16:54:39.956Z] busy:2701969521 (cyc) 00:05:16.915 [2024-12-09T16:54:39.956Z] total_run_count: 4465000 00:05:16.915 [2024-12-09T16:54:39.956Z] tsc_hz: 2700000000 (cyc) 00:05:16.915 [2024-12-09T16:54:39.956Z] ====================================== 00:05:16.915 [2024-12-09T16:54:39.956Z] poller_cost: 605 (cyc), 224 (nsec) 00:05:16.915 00:05:16.915 real 0m1.202s 00:05:16.915 user 0m1.129s 00:05:16.915 sys 0m0.068s 00:05:16.915 17:54:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.915 17:54:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.915 ************************************ 00:05:16.915 END TEST thread_poller_perf 00:05:16.915 ************************************ 00:05:17.174 17:54:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:17.174 00:05:17.174 real 0m2.665s 00:05:17.174 user 0m2.405s 00:05:17.174 sys 0m0.258s 00:05:17.174 17:54:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.174 17:54:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.174 ************************************ 00:05:17.174 END TEST thread 00:05:17.174 ************************************ 00:05:17.174 17:54:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:17.174 17:54:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.174 17:54:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.174 17:54:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.174 17:54:39 -- common/autotest_common.sh@10 -- # set +x 00:05:17.174 ************************************ 00:05:17.174 START TEST app_cmdline 00:05:17.174 ************************************ 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.174 * Looking for test storage... 00:05:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.174 17:54:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.174 --rc genhtml_branch_coverage=1 
00:05:17.174 --rc genhtml_function_coverage=1 00:05:17.174 --rc genhtml_legend=1 00:05:17.174 --rc geninfo_all_blocks=1 00:05:17.174 --rc geninfo_unexecuted_blocks=1 00:05:17.174 00:05:17.174 ' 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.174 --rc genhtml_branch_coverage=1 00:05:17.174 --rc genhtml_function_coverage=1 00:05:17.174 --rc genhtml_legend=1 00:05:17.174 --rc geninfo_all_blocks=1 00:05:17.174 --rc geninfo_unexecuted_blocks=1 00:05:17.174 00:05:17.174 ' 00:05:17.174 17:54:40 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.174 --rc genhtml_branch_coverage=1 00:05:17.174 --rc genhtml_function_coverage=1 00:05:17.174 --rc genhtml_legend=1 00:05:17.174 --rc geninfo_all_blocks=1 00:05:17.174 --rc geninfo_unexecuted_blocks=1 00:05:17.174 00:05:17.175 ' 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.175 --rc genhtml_branch_coverage=1 00:05:17.175 --rc genhtml_function_coverage=1 00:05:17.175 --rc genhtml_legend=1 00:05:17.175 --rc geninfo_all_blocks=1 00:05:17.175 --rc geninfo_unexecuted_blocks=1 00:05:17.175 00:05:17.175 ' 00:05:17.175 17:54:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:17.175 17:54:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1353149 00:05:17.175 17:54:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:17.175 17:54:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1353149 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1353149 ']' 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.175 17:54:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.433 [2024-12-09 17:54:40.218976] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:17.433 [2024-12-09 17:54:40.219058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353149 ] 00:05:17.433 [2024-12-09 17:54:40.292257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.433 [2024-12-09 17:54:40.350789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.691 17:54:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.691 17:54:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:17.691 17:54:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:17.948 { 00:05:17.948 "version": "SPDK v25.01-pre git sha1 9237e57ed", 00:05:17.948 "fields": { 00:05:17.948 "major": 25, 00:05:17.948 "minor": 1, 00:05:17.948 "patch": 0, 00:05:17.948 "suffix": "-pre", 00:05:17.948 "commit": "9237e57ed" 00:05:17.948 } 00:05:17.948 } 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:17.948 17:54:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:17.948 17:54:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.206 request: 00:05:18.206 { 00:05:18.206 "method": "env_dpdk_get_mem_stats", 00:05:18.206 "req_id": 1 00:05:18.206 } 00:05:18.206 Got JSON-RPC error response 00:05:18.206 response: 00:05:18.206 { 00:05:18.206 "code": -32601, 00:05:18.206 "message": "Method not found" 00:05:18.206 } 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.206 17:54:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1353149 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1353149 ']' 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1353149 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1353149 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1353149' 00:05:18.206 killing process with pid 1353149 00:05:18.206 
17:54:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 1353149 00:05:18.206 17:54:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 1353149 00:05:18.772 00:05:18.772 real 0m1.635s 00:05:18.772 user 0m2.019s 00:05:18.772 sys 0m0.483s 00:05:18.772 17:54:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.772 17:54:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.772 ************************************ 00:05:18.772 END TEST app_cmdline 00:05:18.772 ************************************ 00:05:18.772 17:54:41 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:18.772 17:54:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.772 17:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.772 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:18.772 ************************************ 00:05:18.772 START TEST version 00:05:18.772 ************************************ 00:05:18.772 17:54:41 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:18.772 * Looking for test storage... 
00:05:18.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:18.772 17:54:41 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.772 17:54:41 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.772 17:54:41 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.031 17:54:41 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.031 17:54:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.031 17:54:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.031 17:54:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.031 17:54:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.031 17:54:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.031 17:54:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.031 17:54:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.031 17:54:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.031 17:54:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.031 17:54:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.031 17:54:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.031 17:54:41 version -- scripts/common.sh@344 -- # case "$op" in 00:05:19.031 17:54:41 version -- scripts/common.sh@345 -- # : 1 00:05:19.031 17:54:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.031 17:54:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.031 17:54:41 version -- scripts/common.sh@365 -- # decimal 1 00:05:19.031 17:54:41 version -- scripts/common.sh@353 -- # local d=1 00:05:19.031 17:54:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.031 17:54:41 version -- scripts/common.sh@355 -- # echo 1 00:05:19.031 17:54:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.031 17:54:41 version -- scripts/common.sh@366 -- # decimal 2 00:05:19.031 17:54:41 version -- scripts/common.sh@353 -- # local d=2 00:05:19.031 17:54:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.031 17:54:41 version -- scripts/common.sh@355 -- # echo 2 00:05:19.031 17:54:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.031 17:54:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.031 17:54:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.031 17:54:41 version -- scripts/common.sh@368 -- # return 0 00:05:19.031 17:54:41 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.031 17:54:41 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.031 --rc genhtml_branch_coverage=1 00:05:19.031 --rc genhtml_function_coverage=1 00:05:19.031 --rc genhtml_legend=1 00:05:19.031 --rc geninfo_all_blocks=1 00:05:19.031 --rc geninfo_unexecuted_blocks=1 00:05:19.031 00:05:19.031 ' 00:05:19.031 17:54:41 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.031 --rc genhtml_branch_coverage=1 00:05:19.031 --rc genhtml_function_coverage=1 00:05:19.031 --rc genhtml_legend=1 00:05:19.031 --rc geninfo_all_blocks=1 00:05:19.031 --rc geninfo_unexecuted_blocks=1 00:05:19.031 00:05:19.031 ' 00:05:19.031 17:54:41 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.031 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.031 --rc genhtml_branch_coverage=1 00:05:19.031 --rc genhtml_function_coverage=1 00:05:19.031 --rc genhtml_legend=1 00:05:19.031 --rc geninfo_all_blocks=1 00:05:19.031 --rc geninfo_unexecuted_blocks=1 00:05:19.031 00:05:19.031 ' 00:05:19.031 17:54:41 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.031 --rc genhtml_branch_coverage=1 00:05:19.031 --rc genhtml_function_coverage=1 00:05:19.031 --rc genhtml_legend=1 00:05:19.031 --rc geninfo_all_blocks=1 00:05:19.031 --rc geninfo_unexecuted_blocks=1 00:05:19.031 00:05:19.031 ' 00:05:19.031 17:54:41 version -- app/version.sh@17 -- # get_header_version major 00:05:19.031 17:54:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # cut -f2 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.031 17:54:41 version -- app/version.sh@17 -- # major=25 00:05:19.031 17:54:41 version -- app/version.sh@18 -- # get_header_version minor 00:05:19.031 17:54:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # cut -f2 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.031 17:54:41 version -- app/version.sh@18 -- # minor=1 00:05:19.031 17:54:41 version -- app/version.sh@19 -- # get_header_version patch 00:05:19.031 17:54:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # cut -f2 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.031 
17:54:41 version -- app/version.sh@19 -- # patch=0 00:05:19.031 17:54:41 version -- app/version.sh@20 -- # get_header_version suffix 00:05:19.031 17:54:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # cut -f2 00:05:19.031 17:54:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.031 17:54:41 version -- app/version.sh@20 -- # suffix=-pre 00:05:19.031 17:54:41 version -- app/version.sh@22 -- # version=25.1 00:05:19.031 17:54:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:19.031 17:54:41 version -- app/version.sh@28 -- # version=25.1rc0 00:05:19.032 17:54:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:19.032 17:54:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:19.032 17:54:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:19.032 17:54:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:19.032 00:05:19.032 real 0m0.197s 00:05:19.032 user 0m0.141s 00:05:19.032 sys 0m0.082s 00:05:19.032 17:54:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.032 17:54:41 version -- common/autotest_common.sh@10 -- # set +x 00:05:19.032 ************************************ 00:05:19.032 END TEST version 00:05:19.032 ************************************ 00:05:19.032 17:54:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:19.032 17:54:41 -- spdk/autotest.sh@194 -- # uname -s 00:05:19.032 17:54:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:19.032 17:54:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.032 17:54:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.032 17:54:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:19.032 17:54:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.032 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:19.032 17:54:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:19.032 17:54:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:19.032 17:54:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:19.032 17:54:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.032 17:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.032 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:19.032 ************************************ 00:05:19.032 START TEST nvmf_tcp 00:05:19.032 ************************************ 00:05:19.032 17:54:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:19.032 * Looking for test storage... 
00:05:19.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:19.032 17:54:42 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.032 17:54:42 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.032 17:54:42 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.291 17:54:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:19.291 17:54:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:19.291 17:54:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.291 17:54:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.291 ************************************ 00:05:19.291 START TEST nvmf_target_core 00:05:19.291 ************************************ 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:19.291 * Looking for test storage... 
00:05:19.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 
00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.291 --rc genhtml_branch_coverage=1 00:05:19.291 --rc genhtml_function_coverage=1 00:05:19.291 --rc genhtml_legend=1 00:05:19.291 --rc geninfo_all_blocks=1 00:05:19.291 --rc geninfo_unexecuted_blocks=1 00:05:19.291 00:05:19.291 ' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.291 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.292 17:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:19.551 ************************************ 00:05:19.551 START TEST nvmf_abort 00:05:19.551 ************************************ 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:19.551 * Looking for test storage... 
00:05:19.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.551 
17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.551 --rc genhtml_branch_coverage=1 00:05:19.551 --rc genhtml_function_coverage=1 00:05:19.551 --rc genhtml_legend=1 00:05:19.551 --rc geninfo_all_blocks=1 00:05:19.551 --rc 
geninfo_unexecuted_blocks=1 00:05:19.551 00:05:19.551 ' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.551 --rc genhtml_branch_coverage=1 00:05:19.551 --rc genhtml_function_coverage=1 00:05:19.551 --rc genhtml_legend=1 00:05:19.551 --rc geninfo_all_blocks=1 00:05:19.551 --rc geninfo_unexecuted_blocks=1 00:05:19.551 00:05:19.551 ' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.551 --rc genhtml_branch_coverage=1 00:05:19.551 --rc genhtml_function_coverage=1 00:05:19.551 --rc genhtml_legend=1 00:05:19.551 --rc geninfo_all_blocks=1 00:05:19.551 --rc geninfo_unexecuted_blocks=1 00:05:19.551 00:05:19.551 ' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.551 --rc genhtml_branch_coverage=1 00:05:19.551 --rc genhtml_function_coverage=1 00:05:19.551 --rc genhtml_legend=1 00:05:19.551 --rc geninfo_all_blocks=1 00:05:19.551 --rc geninfo_unexecuted_blocks=1 00:05:19.551 00:05:19.551 ' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.551 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.551 17:54:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:19.552 17:54:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:22.084 17:54:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:22.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:22.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:22.084 17:54:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:22.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:05:22.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:22.084 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:22.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:22.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:05:22.085 00:05:22.085 --- 10.0.0.2 ping statistics --- 00:05:22.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:22.085 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:22.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:22.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:05:22.085 00:05:22.085 --- 10.0.0.1 ping statistics --- 00:05:22.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:22.085 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1355342 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1355342 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1355342 ']' 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.085 17:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.085 [2024-12-09 17:54:44.912111] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:05:22.085 [2024-12-09 17:54:44.912209] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:22.085 [2024-12-09 17:54:44.987381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.085 [2024-12-09 17:54:45.048501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:22.085 [2024-12-09 17:54:45.048563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:22.085 [2024-12-09 17:54:45.048593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:22.085 [2024-12-09 17:54:45.048605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:22.085 [2024-12-09 17:54:45.048615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:22.085 [2024-12-09 17:54:45.050287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.085 [2024-12-09 17:54:45.050313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.085 [2024-12-09 17:54:45.050317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.343 [2024-12-09 17:54:45.204480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.343 Malloc0 00:05:22.343 17:54:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.343 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.344 Delay0 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.344 [2024-12-09 17:54:45.271629] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.344 17:54:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:22.344 [2024-12-09 17:54:45.346161] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:24.891 Initializing NVMe Controllers 00:05:24.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:24.891 controller IO queue size 128 less than required 00:05:24.891 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:24.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:24.891 Initialization complete. Launching workers. 
00:05:24.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29104 00:05:24.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29165, failed to submit 62 00:05:24.891 success 29108, unsuccessful 57, failed 0 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:24.891 rmmod nvme_tcp 00:05:24.891 rmmod nvme_fabrics 00:05:24.891 rmmod nvme_keyring 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:24.891 17:54:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1355342 ']' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1355342 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1355342 ']' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1355342 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1355342 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1355342' 00:05:24.891 killing process with pid 1355342 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1355342 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1355342 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.891 17:54:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.800 17:54:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:26.800 00:05:26.800 real 0m7.489s 00:05:26.800 user 0m10.587s 00:05:26.800 sys 0m2.655s 00:05:26.800 17:54:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.800 17:54:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.800 ************************************ 00:05:26.800 END TEST nvmf_abort 00:05:26.800 ************************************ 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:27.059 ************************************ 00:05:27.059 START TEST nvmf_ns_hotplug_stress 00:05:27.059 ************************************ 00:05:27.059 17:54:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:27.059 * Looking for test storage... 00:05:27.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.059 17:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.059 
17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.059 17:54:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.059 --rc genhtml_branch_coverage=1 00:05:27.059 --rc genhtml_function_coverage=1 00:05:27.059 --rc genhtml_legend=1 00:05:27.059 --rc geninfo_all_blocks=1 00:05:27.059 --rc geninfo_unexecuted_blocks=1 00:05:27.059 00:05:27.059 ' 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.059 --rc genhtml_branch_coverage=1 00:05:27.059 --rc genhtml_function_coverage=1 00:05:27.059 --rc genhtml_legend=1 00:05:27.059 --rc geninfo_all_blocks=1 00:05:27.059 --rc geninfo_unexecuted_blocks=1 00:05:27.059 00:05:27.059 ' 00:05:27.059 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.060 --rc genhtml_branch_coverage=1 00:05:27.060 --rc genhtml_function_coverage=1 00:05:27.060 --rc genhtml_legend=1 00:05:27.060 --rc geninfo_all_blocks=1 00:05:27.060 --rc geninfo_unexecuted_blocks=1 00:05:27.060 00:05:27.060 ' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.060 --rc genhtml_branch_coverage=1 00:05:27.060 --rc genhtml_function_coverage=1 00:05:27.060 --rc genhtml_legend=1 00:05:27.060 --rc geninfo_all_blocks=1 00:05:27.060 --rc geninfo_unexecuted_blocks=1 00:05:27.060 
00:05:27.060 ' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:27.060 17:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.660 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:29.661 17:54:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:29.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:29.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:29.661 17:54:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:29.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:29.661 17:54:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:29.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:29.661 17:54:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:29.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:29.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:05:29.661 00:05:29.661 --- 10.0.0.2 ping statistics --- 00:05:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.661 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:05:29.661 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:29.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:29.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:05:29.662 00:05:29.662 --- 10.0.0.1 ping statistics --- 00:05:29.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.662 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1357614 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1357614 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1357614 ']' 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.662 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.662 [2024-12-09 17:54:52.539388] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:05:29.662 [2024-12-09 17:54:52.539460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:29.662 [2024-12-09 17:54:52.612454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.662 [2024-12-09 17:54:52.666034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:29.662 [2024-12-09 17:54:52.666090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:29.662 [2024-12-09 17:54:52.666118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.662 [2024-12-09 17:54:52.666137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.662 [2024-12-09 17:54:52.666146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:29.662 [2024-12-09 17:54:52.667728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.662 [2024-12-09 17:54:52.667756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.662 [2024-12-09 17:54:52.667760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:29.921 17:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:30.179 [2024-12-09 17:54:53.054421] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.179 17:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:30.437 17:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:30.695 [2024-12-09 17:54:53.593227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:30.695 17:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:30.953 17:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:31.212 Malloc0 00:05:31.212 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:31.470 Delay0 00:05:31.470 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.728 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:31.986 NULL1 00:05:31.986 17:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:32.243 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1358008 00:05:32.243 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:32.243 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:32.243 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.501 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.759 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:32.759 17:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:33.017 true 00:05:33.275 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:33.275 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.533 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.791 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:33.791 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:34.049 true 00:05:34.049 17:54:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:34.049 17:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.307 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.565 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:34.565 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:34.823 true 00:05:34.823 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:34.823 17:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.756 Read completed with error (sct=0, sc=11) 00:05:35.756 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.014 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:36.014 17:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:36.272 true 00:05:36.272 17:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:36.272 17:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.530 17:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.789 17:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:36.789 17:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:37.047 true 00:05:37.047 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:37.047 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.612 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.612 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:37.612 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:37.870 true 00:05:37.870 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:37.870 17:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.805 17:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.063 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:39.063 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:39.320 true 00:05:39.320 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:39.320 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.578 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.144 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:40.144 17:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:40.144 true 00:05:40.144 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:40.144 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.078 17:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.337 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:41.337 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:41.596 true 00:05:41.596 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:41.596 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.854 17:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.112 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:42.112 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:42.369 true 00:05:42.369 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:42.369 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.627 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.884 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:42.885 17:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:43.142 true 00:05:43.143 17:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:43.143 17:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.077 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.335 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:44.335 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:44.593 true 00:05:44.593 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:44.593 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.851 17:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.109 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:45.109 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:45.367 true 00:05:45.367 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:45.367 17:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.301 17:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.559 17:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:46.559 17:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:46.817 true 00:05:46.817 17:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:46.817 17:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.085 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.343 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:47.343 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:47.600 true 00:05:47.600 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:47.600 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.858 17:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.116 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:48.116 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:48.375 true 00:05:48.375 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:48.375 17:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:49.308 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.566 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:49.566 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:49.824 true 00:05:50.082 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:50.082 17:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.339 17:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.597 17:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:50.597 17:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:50.855 true 00:05:50.855 17:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:50.855 17:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.113 17:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.371 17:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:51.371 17:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:51.629 true 00:05:51.629 17:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:51.629 17:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.586 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.844 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:52.844 17:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:53.102 true 00:05:53.102 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:53.102 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.360 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:05:53.618 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:53.618 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:53.875 true 00:05:53.875 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:53.876 17:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.134 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.699 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:54.699 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:54.699 true 00:05:54.699 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:54.699 17:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.071 17:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.071 17:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 
00:05:56.071 17:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:56.328 true 00:05:56.328 17:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:56.328 17:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.586 17:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.843 17:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:56.843 17:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:57.100 true 00:05:57.101 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:57.101 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.358 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.616 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:57.616 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:57.875 true 00:05:57.875 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:57.875 17:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.250 17:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.250 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:59.250 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:59.508 true 00:05:59.508 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:05:59.508 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.766 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.024 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:00.024 17:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:00.282 true 00:06:00.282 17:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:06:00.282 17:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.217 17:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.475 17:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:01.475 17:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:01.733 true 00:06:01.733 17:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:06:01.733 17:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.991 17:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.248 17:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:02.248 17:55:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:02.505 true 00:06:02.505 17:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:06:02.505 17:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.438 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.438 Initializing NVMe Controllers 00:06:03.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:03.438 Controller IO queue size 128, less than required. 00:06:03.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:03.438 Controller IO queue size 128, less than required. 00:06:03.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:03.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:03.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:03.438 Initialization complete. Launching workers. 
00:06:03.438 ======================================================== 00:06:03.438 Latency(us) 00:06:03.438 Device Information : IOPS MiB/s Average min max 00:06:03.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 381.33 0.19 134924.18 2965.28 1022525.34 00:06:03.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8108.86 3.96 15739.36 3019.98 455171.12 00:06:03.438 ======================================================== 00:06:03.438 Total : 8490.19 4.15 21092.41 2965.28 1022525.34 00:06:03.438 00:06:03.438 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:03.438 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:03.696 true 00:06:03.696 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1358008 00:06:03.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1358008) - No such process 00:06:03.696 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1358008 00:06:03.696 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.954 17:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.212 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:04.212 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 
00:06:04.212 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:04.212 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.212 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:04.470 null0 00:06:04.728 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.728 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.728 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:04.986 null1 00:06:04.986 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.986 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.986 17:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:05.244 null2 00:06:05.244 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.244 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.244 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:05.502 null3 00:06:05.502 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.502 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.502 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:05.760 null4 00:06:05.760 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.760 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.760 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:06.018 null5 00:06:06.018 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.018 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.018 17:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:06.276 null6 00:06:06.276 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.276 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.276 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:06.535 null7 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.535 17:55:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1362197 1362198 1362200 1362202 1362205 1362210 1362212 1362214 00:06:06.535 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.536 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.794 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.053 17:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.053 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.312 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.891 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.891 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.892 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.150 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.150 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.150 17:55:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:08.150 17:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.409 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:08.668 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:08.927 17:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:09.186 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:09.445 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:09.704 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:09.963 17:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:10.221 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.222 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:10.480 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:10.481 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.742 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:11.043 17:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.346 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:11.626 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.885 17:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.144 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.402 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:12.661 rmmod nvme_tcp 00:06:12.661 rmmod nvme_fabrics 00:06:12.661 rmmod nvme_keyring 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1357614 ']' 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1357614 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 1357614 ']' 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1357614 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1357614 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1357614' 00:06:12.661 killing process with pid 1357614 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1357614 00:06:12.661 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1357614 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.920 17:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:14.826 00:06:14.826 real 0m47.946s 00:06:14.826 user 3m43.232s 00:06:14.826 sys 0m15.901s 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.826 ************************************ 00:06:14.826 END TEST nvmf_ns_hotplug_stress 00:06:14.826 ************************************ 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.826 17:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.085 ************************************ 00:06:15.085 START TEST nvmf_delete_subsystem 00:06:15.085 ************************************ 00:06:15.086 
17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.086 * Looking for test storage... 00:06:15.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.086 17:55:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:15.086 17:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.086 17:55:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.086 --rc genhtml_branch_coverage=1 00:06:15.086 --rc genhtml_function_coverage=1 00:06:15.086 --rc genhtml_legend=1 00:06:15.086 --rc geninfo_all_blocks=1 00:06:15.086 --rc geninfo_unexecuted_blocks=1 00:06:15.086 00:06:15.086 ' 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.086 --rc genhtml_branch_coverage=1 00:06:15.086 --rc genhtml_function_coverage=1 00:06:15.086 --rc genhtml_legend=1 00:06:15.086 --rc geninfo_all_blocks=1 00:06:15.086 --rc geninfo_unexecuted_blocks=1 00:06:15.086 00:06:15.086 ' 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.086 --rc genhtml_branch_coverage=1 00:06:15.086 --rc genhtml_function_coverage=1 00:06:15.086 --rc genhtml_legend=1 00:06:15.086 --rc geninfo_all_blocks=1 00:06:15.086 --rc geninfo_unexecuted_blocks=1 00:06:15.086 00:06:15.086 ' 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.086 --rc genhtml_branch_coverage=1 00:06:15.086 --rc genhtml_function_coverage=1 00:06:15.086 --rc genhtml_legend=1 00:06:15.086 --rc geninfo_all_blocks=1 00:06:15.086 --rc geninfo_unexecuted_blocks=1 00:06:15.086 00:06:15.086 ' 
00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.086 17:55:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.086 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.087 17:55:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.623 17:55:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:17.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:17.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:17.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:06:17.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:17.623 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:17.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:17.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:06:17.623 00:06:17.623 --- 10.0.0.2 ping statistics --- 00:06:17.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.624 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:06:17.624 00:06:17.624 --- 10.0.0.1 ping statistics --- 00:06:17.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.624 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:17.624 17:55:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1364996 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1364996 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1364996 ']' 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.624 [2024-12-09 17:55:40.408261] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
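The `nvmfappstart` step above launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the app answers on /var/tmp/spdk.sock. A minimal sketch of that polling idea, simplified to checking for the socket file rather than issuing a real RPC (the function name and retry cap are illustrative, not taken from the harness):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll until the target creates its RPC
# socket, with a bounded retry count instead of hanging forever.
# Simplification: the real helper also checks that the pid is still alive
# and that the socket answers RPCs; here we only test for the socket file.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                          # gave up: target never came up
}
```

On a timeout the real harness kills the target pid and fails the stage instead of returning.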
00:06:17.624 [2024-12-09 17:55:40.408364] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.624 [2024-12-09 17:55:40.481924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.624 [2024-12-09 17:55:40.540906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.624 [2024-12-09 17:55:40.540963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.624 [2024-12-09 17:55:40.540992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.624 [2024-12-09 17:55:40.541003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.624 [2024-12-09 17:55:40.541013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
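The `nvmf_tcp_init` steps earlier in this run (flush addresses, create the cvl_0_0_ns_spdk namespace, move the target port into it, address both ends, open TCP/4420, ping both directions) can be replayed as a dry-run sketch; `run` only echoes here, since the real commands need root and the actual NICs from this machine:

```shell
#!/usr/bin/env bash
# Dry-run replay of the namespace plumbing from nvmf_tcp_init.
# run() just echoes the command; drop it to execute for real (as root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=${TARGET_IF}_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # target port lives in the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                         # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"  # target -> initiator
```

Moving the port into a namespace is what lets one host act as both target and initiator over real hardware: traffic between cvl_0_1 and cvl_0_0 must leave the default namespace and traverse the wire.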
00:06:17.624 [2024-12-09 17:55:40.542514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.624 [2024-12-09 17:55:40.542520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.624 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.882 [2024-12-09 17:55:40.689479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.882 [2024-12-09 17:55:40.705721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.882 NULL1 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.882 Delay0 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.882 17:55:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.882 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.883 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.883 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.883 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1365138 00:06:17.883 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:17.883 17:55:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:17.883 [2024-12-09 17:55:40.790522] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
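Collected from the `rpc_cmd` calls above, the whole target-side provisioning fits in a few RPCs. A dry-run sketch — `rpc` only echoes here; a live run would invoke `scripts/rpc.py` against the target's /var/tmp/spdk.sock (that path and the latency-unit comment come from SPDK's rpc.py conventions, not from this log):

```shell
#!/usr/bin/env bash
# Dry-run of the provisioning RPCs issued by delete_subsystem.sh before
# perf starts. rpc() echoes; swap in scripts/rpc.py to run for real.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192        # transport options copied verbatim from the log
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                # null backing bdev, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0            # expose the delay bdev as NSID 1
```

The bdev_delay_create values are latencies in microseconds (per rpc.py), i.e. roughly a second of injected delay per IO; that is what leaves IO queued when `nvmf_delete_subsystem` fires and produces the aborted completions that follow.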
00:06:19.782 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:19.782 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.782 17:55:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error 
(sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Write completed with error (sct=0, sc=8) 00:06:20.039 starting I/O failed: -6 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 Read completed with error (sct=0, sc=8) 00:06:20.039 [2024-12-09 17:55:42.911047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16274a0 is same with the state(6) to be set 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 
Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 starting I/O failed: -6 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 [2024-12-09 17:55:42.913070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52a400d4b0 is same with the state(6) to be set 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write 
completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error 
(sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 
Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.040 [2024-12-09 17:55:42.913485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1627860 is same with the state(6) to be set 00:06:20.040 Write completed with error (sct=0, sc=8) 00:06:20.040 Read completed with error (sct=0, sc=8) 00:06:20.973 [2024-12-09 17:55:43.884587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16289b0 is same with the state(6) to be set 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write 
completed with error (sct=0, sc=8) 00:06:20.973 [2024-12-09 17:55:43.914401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16272c0 is same with the state(6) to be set 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 [2024-12-09 17:55:43.915755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52a400d7e0 is same with the state(6) to be set 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read 
completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 [2024-12-09 17:55:43.915893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1627680 is same with the state(6) to be set 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Write completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 Read completed with error (sct=0, sc=8) 00:06:20.973 [2024-12-09 17:55:43.916304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52a400d020 is same with the state(6) to be set 00:06:20.973 Initializing NVMe Controllers 00:06:20.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:20.973 Controller IO queue size 128, less than required. 00:06:20.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:20.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:20.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:20.973 Initialization complete. Launching workers. 00:06:20.973 ======================================================== 00:06:20.973 Latency(us) 00:06:20.973 Device Information : IOPS MiB/s Average min max 00:06:20.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.97 0.07 938055.26 2447.06 1011791.73 00:06:20.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.44 0.08 921584.52 443.51 1011886.32 00:06:20.973 ======================================================== 00:06:20.973 Total : 311.41 0.15 929675.41 443.51 1011886.32 00:06:20.973 00:06:20.973 [2024-12-09 17:55:43.917043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16289b0 (9): Bad file descriptor 00:06:20.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:20.973 17:55:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.973 17:55:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:20.973 17:55:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1365138 00:06:20.973 17:55:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1365138 00:06:21.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1365138) - No such process 00:06:21.539 17:55:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1365138 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1365138 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1365138 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.539 [2024-12-09 17:55:44.440654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1365540 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.539 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 
-t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:21.539 [2024-12-09 17:55:44.514029] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:22.104 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.104 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:22.104 17:55:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.670 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.670 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:22.670 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.928 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.928 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:22.928 17:55:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.493 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.493 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:23.493 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.059 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.059 17:55:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:24.059 17:55:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.631 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.631 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:24.631 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.890 Initializing NVMe Controllers 00:06:24.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:24.890 Controller IO queue size 128, less than required. 00:06:24.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:24.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:24.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:24.890 Initialization complete. Launching workers. 
00:06:24.890 ======================================================== 00:06:24.890 Latency(us) 00:06:24.890 Device Information : IOPS MiB/s Average min max 00:06:24.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004191.56 1000218.75 1012659.72 00:06:24.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004814.78 1000195.54 1043017.88 00:06:24.890 ======================================================== 00:06:24.890 Total : 256.00 0.12 1004503.17 1000195.54 1043017.88 00:06:24.890 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1365540 00:06:25.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1365540) - No such process 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1365540 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:25.148 17:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:25.148 rmmod nvme_tcp 00:06:25.148 rmmod nvme_fabrics 00:06:25.148 rmmod nvme_keyring 00:06:25.148 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:25.148 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:25.148 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:25.148 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1364996 ']' 00:06:25.148 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1364996 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1364996 ']' 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1364996 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364996 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364996' 00:06:25.149 killing process with pid 1364996 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1364996 00:06:25.149 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1364996 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.409 17:55:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.322 00:06:27.322 real 0m12.453s 00:06:27.322 user 0m27.969s 00:06:27.322 sys 0m2.977s 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.322 ************************************ 00:06:27.322 END TEST 
nvmf_delete_subsystem 00:06:27.322 ************************************ 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.322 17:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.581 ************************************ 00:06:27.581 START TEST nvmf_host_management 00:06:27.581 ************************************ 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:27.581 * Looking for test storage... 00:06:27.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.581 17:55:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.581 --rc genhtml_branch_coverage=1 00:06:27.581 --rc genhtml_function_coverage=1 00:06:27.581 --rc genhtml_legend=1 00:06:27.581 --rc 
geninfo_all_blocks=1 00:06:27.581 --rc geninfo_unexecuted_blocks=1 00:06:27.581 00:06:27.581 ' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.581 --rc genhtml_branch_coverage=1 00:06:27.581 --rc genhtml_function_coverage=1 00:06:27.581 --rc genhtml_legend=1 00:06:27.581 --rc geninfo_all_blocks=1 00:06:27.581 --rc geninfo_unexecuted_blocks=1 00:06:27.581 00:06:27.581 ' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.581 --rc genhtml_branch_coverage=1 00:06:27.581 --rc genhtml_function_coverage=1 00:06:27.581 --rc genhtml_legend=1 00:06:27.581 --rc geninfo_all_blocks=1 00:06:27.581 --rc geninfo_unexecuted_blocks=1 00:06:27.581 00:06:27.581 ' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.581 --rc genhtml_branch_coverage=1 00:06:27.581 --rc genhtml_function_coverage=1 00:06:27.581 --rc genhtml_legend=1 00:06:27.581 --rc geninfo_all_blocks=1 00:06:27.581 --rc geninfo_unexecuted_blocks=1 00:06:27.581 00:06:27.581 ' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.581 
17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.581 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.582 17:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:30.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:30.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.117 17:55:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:30.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:30.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
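The trace above shows nvmf/common.sh walking each discovered PCI NIC and collecting its kernel net interface names from sysfs (yielding cvl_0_0 and cvl_0_1 here). A minimal, self-contained sketch of that glob-and-strip logic, using a temporary mock sysfs tree instead of the real /sys/bus/pci/devices:

```shell
# Sketch of the net-device discovery loop traced above (common.sh@410-429).
# The sysfs root and interface name below are mocked stand-ins so this runs
# without real hardware; the real script globs /sys/bus/pci/devices.
sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0"

pci_devs=(0000:0a:00.0)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("$sysfs/$pci/net/"*)      # glob interfaces bound to this NIC
    pci_net_devs=("${pci_net_devs[@]##*/}") # strip dir prefix, keep iface names
    net_devs+=("${pci_net_devs[@]}")
done
echo "Found net devices: ${net_devs[*]}"
```

The `##*/` parameter expansion is the same trick the traced script uses at common.sh@427 to turn full sysfs paths into bare interface names.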
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.117 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:06:30.118 00:06:30.118 --- 10.0.0.2 ping statistics --- 00:06:30.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.118 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:30.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:06:30.118 00:06:30.118 --- 10.0.0.1 ping statistics --- 00:06:30.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.118 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.118 17:55:52 
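The nvmf_tcp_init sequence traced above moves the target NIC into a private network namespace, assigns the 10.0.0.1/24 and 10.0.0.2/24 test addresses to the two ends, and opens TCP port 4420 before verifying with ping. A dry-run sketch that only collects the command strings (they are not executed here, since the real steps need root and the actual cvl_0_* interfaces):

```shell
# Dry-run sketch of nvmf_tcp_init (common.sh@250-287): commands are gathered
# into an array and printed, not run. Interface/namespace names mirror the log.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
steps=(
    "ip netns add $ns"
    "ip link set $target_if netns $ns"
    "ip addr add 10.0.0.1/24 dev $initiator_if"
    "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    "ip link set $initiator_if up"
    "ip netns exec $ns ip link set $target_if up"
    "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${steps[@]}"
```

After this, everything run under `ip netns exec cvl_0_0_ns_spdk ...` (the NVMF_TARGET_NS_CMD prefix in the trace) sees only the target-side interface, which is why the ping in both directions proves the link.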
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1367909 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1367909 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1367909 ']' 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.118 17:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.118 [2024-12-09 17:55:52.886996] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:06:30.118 [2024-12-09 17:55:52.887078] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.118 [2024-12-09 17:55:52.956146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.118 [2024-12-09 17:55:53.011259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.118 [2024-12-09 17:55:53.011331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.118 [2024-12-09 17:55:53.011359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.118 [2024-12-09 17:55:53.011371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.118 [2024-12-09 17:55:53.011381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:30.118 [2024-12-09 17:55:53.013059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.118 [2024-12-09 17:55:53.013168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.118 [2024-12-09 17:55:53.013258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.118 [2024-12-09 17:55:53.013266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.118 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.118 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:30.118 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:30.118 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.118 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 [2024-12-09 17:55:53.161025] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:30.377 17:55:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 Malloc0 00:06:30.377 [2024-12-09 17:55:53.233771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1368068 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1368068 /var/tmp/bdevperf.sock 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1368068 ']' 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:30.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:30.377 { 00:06:30.377 "params": { 00:06:30.377 "name": "Nvme$subsystem", 00:06:30.377 "trtype": "$TEST_TRANSPORT", 00:06:30.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:30.377 "adrfam": "ipv4", 00:06:30.377 "trsvcid": "$NVMF_PORT", 00:06:30.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:30.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:30.377 "hdgst": ${hdgst:-false}, 
00:06:30.377 "ddgst": ${ddgst:-false} 00:06:30.377 }, 00:06:30.377 "method": "bdev_nvme_attach_controller" 00:06:30.377 } 00:06:30.377 EOF 00:06:30.377 )") 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:30.377 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:30.377 "params": { 00:06:30.377 "name": "Nvme0", 00:06:30.377 "trtype": "tcp", 00:06:30.377 "traddr": "10.0.0.2", 00:06:30.377 "adrfam": "ipv4", 00:06:30.377 "trsvcid": "4420", 00:06:30.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:30.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:30.377 "hdgst": false, 00:06:30.377 "ddgst": false 00:06:30.377 }, 00:06:30.377 "method": "bdev_nvme_attach_controller" 00:06:30.377 }' 00:06:30.377 [2024-12-09 17:55:53.316080] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:06:30.377 [2024-12-09 17:55:53.316155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368068 ] 00:06:30.377 [2024-12-09 17:55:53.385753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.636 [2024-12-09 17:55:53.445899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.636 Running I/O for 10 seconds... 
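The heredoc template traced above (common.sh@582) is expanded by gen_nvmf_target_json into the concrete JSON that bdevperf receives on /dev/fd/63. A sketch reproducing that expansion for subsystem 0; note the real helper loops over multiple subsystems and post-processes the result through `jq` before printing, which this fragment skips:

```shell
# Sketch of the per-subsystem config entry gen_nvmf_target_json emits,
# with the same values the log shows after expansion.
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Because the heredoc delimiter is unquoted, `$subsystem` and the `${hdgst:-false}` defaults expand at generation time, which is exactly why the printf'd result in the trace shows literal `"Nvme0"` and `false`.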
00:06:30.636 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.636 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:30.636 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:30.636 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.636 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.894 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.895 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:30.895 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:30.895 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.154 17:55:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
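The waitforio helper traced here polls `bdev_get_iostat -b Nvme0n1` up to 10 times, extracting `.bdevs[0].num_read_ops` with jq, and succeeds once the count crosses 100 (67 on the first sample, 515 on the second in this run). A runnable sketch of that loop, with a mock counter standing in for the real RPC + jq pipeline:

```shell
# Sketch of waitforio (host_management.sh@52-64). poll_iostat is a mock that
# returns the two read counts seen in the log (67, then 515); the real code
# runs: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#         | jq -r '.bdevs[0].num_read_ops'
reads=67
poll_iostat() { read_io_count=$reads; reads=$((reads + 448)); }

ret=1
for i in 10 9 8 7 6 5 4 3 2 1; do
    poll_iostat
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break                    # enough I/O observed; bdevperf is healthy
    fi
    sleep 0.25                   # same back-off as host_management.sh@62
done
echo "ret=$ret read_io_count=$read_io_count"
```

Only after this gate returns 0 does the test proceed to `nvmf_subsystem_remove_host`, which is what triggers the flood of "ABORTED - SQ DELETION" completions that follows: yanking the host's access tears down its queue pairs, so every in-flight command on those queues is aborted by design.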
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.155 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.155 [2024-12-09 17:55:54.028794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.028848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.028887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.028902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.028928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.028958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.028972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.028988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 
17:55:54.029305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.155 [2024-12-09 17:55:54.029320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.155 [2024-12-09 17:55:54.029333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 17:55:54.029349-17:55:54.030769: ~50 further identical command/completion pairs elided — nvme_io_qpair_print_command WRITE sqid:1 cid:48-63 nsid:1 lba:79872-81792 len:128 and READ sqid:1 cid:0-31 nsid:1 lba:73728-77696 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:06:31.157 [2024-12-09 17:55:54.032036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:31.157 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.157 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 
00:06:31.157 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.157 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.157 task offset: 77824 on job bdev=Nvme0n1 fails 00:06:31.157 00:06:31.157 Latency(us) 00:06:31.157 [2024-12-09T16:55:54.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.157 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:31.157 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:31.157 Verification LBA range: start 0x0 length 0x400 00:06:31.157 Nvme0n1 : 0.40 1439.74 89.98 159.97 0.00 38868.36 2694.26 39224.51 00:06:31.157 [2024-12-09T16:55:54.198Z] =================================================================================================================== 00:06:31.157 [2024-12-09T16:55:54.198Z] Total : 1439.74 89.98 159.97 0.00 38868.36 2694.26 39224.51 00:06:31.157 [2024-12-09 17:55:54.033978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.157 [2024-12-09 17:55:54.034008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1121660 (9): Bad file descriptor 00:06:31.157 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.157 17:55:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:31.157 [2024-12-09 17:55:54.166695] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
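[editor's note] The bdevperf summary table above relates IOPS to MiB/s through the fixed 64 KiB IO size (`-o 65536` in the bdevperf invocation). A quick arithmetic check of the reported figures, not part of the original log:

```python
# Sanity-check the bdevperf throughput figures from the log above.
# With 64 KiB (65536-byte) IOs: MiB/s = IOPS * 65536 / 2**20 = IOPS / 16.

IO_SIZE = 65536   # bytes per IO (-o 65536 in the bdevperf command line)
MIB = 2 ** 20     # bytes per MiB

def iops_to_mibs(iops: float) -> float:
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * IO_SIZE / MIB

# Figures reported in the log: 1439.74 IOPS -> 89.98 MiB/s (aborted run),
# 1700.79 IOPS -> 106.30 MiB/s (successful re-run).
print(round(iops_to_mibs(1439.74), 2))
print(round(iops_to_mibs(1700.79), 2))
```

Both results match the MiB/s columns printed by bdevperf, confirming the table is internally consistent.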
00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1368068 00:06:32.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1368068) - No such process 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:32.091 { 00:06:32.091 "params": { 00:06:32.091 "name": "Nvme$subsystem", 00:06:32.091 "trtype": "$TEST_TRANSPORT", 00:06:32.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:32.091 "adrfam": "ipv4", 00:06:32.091 "trsvcid": "$NVMF_PORT", 00:06:32.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:32.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:32.091 "hdgst": ${hdgst:-false}, 00:06:32.091 "ddgst": ${ddgst:-false} 00:06:32.091 }, 00:06:32.091 "method": "bdev_nvme_attach_controller" 00:06:32.091 } 00:06:32.091 EOF 00:06:32.091 )") 00:06:32.091 
17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:32.091 17:55:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:32.091 "params": { 00:06:32.091 "name": "Nvme0", 00:06:32.091 "trtype": "tcp", 00:06:32.091 "traddr": "10.0.0.2", 00:06:32.091 "adrfam": "ipv4", 00:06:32.091 "trsvcid": "4420", 00:06:32.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:32.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:32.091 "hdgst": false, 00:06:32.091 "ddgst": false 00:06:32.091 }, 00:06:32.091 "method": "bdev_nvme_attach_controller" 00:06:32.091 }' 00:06:32.091 [2024-12-09 17:55:55.089293] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:06:32.091 [2024-12-09 17:55:55.089372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368233 ] 00:06:32.349 [2024-12-09 17:55:55.159559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.349 [2024-12-09 17:55:55.218481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.608 Running I/O for 1 seconds... 
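[editor's note] The resolved attach-controller config printed by `gen_nvmf_target_json` above is plain JSON fed to bdevperf via `--json /dev/fd/62`. A minimal sketch (not part of the log) that parses it and checks the fields the template substitutes — the required-key list is an illustrative assumption, not SPDK's authoritative schema:

```python
import json

# The JSON emitted by gen_nvmf_target_json in the log above.
config = json.loads("""
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
""")

# Illustrative check: every field the shell template substitutes is present.
required = {"name", "trtype", "traddr", "adrfam", "trsvcid", "subnqn", "hostnqn"}
missing = required - config["params"].keys()
assert config["method"] == "bdev_nvme_attach_controller"
assert not missing, f"missing params: {missing}"
print("config OK:", config["params"]["name"], "->", config["params"]["traddr"])
```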
00:06:33.677 1664.00 IOPS, 104.00 MiB/s 00:06:33.677 Latency(us) 00:06:33.677 [2024-12-09T16:55:56.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.677 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:33.677 Verification LBA range: start 0x0 length 0x400 00:06:33.677 Nvme0n1 : 1.02 1700.79 106.30 0.00 0.00 37014.03 6213.78 33787.45 00:06:33.677 [2024-12-09T16:55:56.718Z] =================================================================================================================== 00:06:33.677 [2024-12-09T16:55:56.718Z] Total : 1700.79 106.30 0.00 0.00 37014.03 6213.78 33787.45 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:33.677 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:33.677 17:55:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:33.677 rmmod nvme_tcp 00:06:33.935 rmmod nvme_fabrics 00:06:33.935 rmmod nvme_keyring 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1367909 ']' 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1367909 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1367909 ']' 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1367909 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367909 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367909' 00:06:33.935 killing process with pid 1367909 00:06:33.935 17:55:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1367909 00:06:33.935 17:55:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1367909 00:06:34.195 [2024-12-09 17:55:57.035726] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.195 17:55:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.103 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:36.103 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:36.103 00:06:36.103 real 0m8.744s 00:06:36.103 user 0m19.175s 
00:06:36.103 sys 0m2.808s 00:06:36.103 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.103 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.103 ************************************ 00:06:36.103 END TEST nvmf_host_management 00:06:36.103 ************************************ 00:06:36.103 17:55:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:36.103 17:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.362 ************************************ 00:06:36.362 START TEST nvmf_lvol 00:06:36.362 ************************************ 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:36.362 * Looking for test storage... 
00:06:36.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.362 17:55:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.362 --rc genhtml_branch_coverage=1 00:06:36.362 --rc genhtml_function_coverage=1 00:06:36.362 --rc genhtml_legend=1 00:06:36.362 --rc geninfo_all_blocks=1 00:06:36.362 --rc geninfo_unexecuted_blocks=1 
00:06:36.362 00:06:36.362 ' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.362 --rc genhtml_branch_coverage=1 00:06:36.362 --rc genhtml_function_coverage=1 00:06:36.362 --rc genhtml_legend=1 00:06:36.362 --rc geninfo_all_blocks=1 00:06:36.362 --rc geninfo_unexecuted_blocks=1 00:06:36.362 00:06:36.362 ' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.362 --rc genhtml_branch_coverage=1 00:06:36.362 --rc genhtml_function_coverage=1 00:06:36.362 --rc genhtml_legend=1 00:06:36.362 --rc geninfo_all_blocks=1 00:06:36.362 --rc geninfo_unexecuted_blocks=1 00:06:36.362 00:06:36.362 ' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.362 --rc genhtml_branch_coverage=1 00:06:36.362 --rc genhtml_function_coverage=1 00:06:36.362 --rc genhtml_legend=1 00:06:36.362 --rc geninfo_all_blocks=1 00:06:36.362 --rc geninfo_unexecuted_blocks=1 00:06:36.362 00:06:36.362 ' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.362 17:55:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.362 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:36.363 17:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:38.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:38.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.898 
17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.898 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:38.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.899 17:56:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:38.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:38.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:38.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:06:38.899 00:06:38.899 --- 10.0.0.2 ping statistics --- 00:06:38.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.899 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:06:38.899 00:06:38.899 --- 10.0.0.1 ping statistics --- 00:06:38.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.899 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1370435 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1370435 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1370435 ']' 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 [2024-12-09 17:56:01.671264] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:06:38.899 [2024-12-09 17:56:01.671363] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.899 [2024-12-09 17:56:01.745096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.899 [2024-12-09 17:56:01.804429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.899 [2024-12-09 17:56:01.804480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.899 [2024-12-09 17:56:01.804509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.899 [2024-12-09 17:56:01.804521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.899 [2024-12-09 17:56:01.804531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:38.899 [2024-12-09 17:56:01.806018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.899 [2024-12-09 17:56:01.806040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.899 [2024-12-09 17:56:01.806044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.899 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:39.181 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.182 17:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.495 [2024-12-09 17:56:02.211735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.495 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:39.753 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:39.753 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:40.011 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:40.011 17:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:40.268 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:40.527 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=50797164-6e9f-4e32-a2b4-abad2d243105 00:06:40.527 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50797164-6e9f-4e32-a2b4-abad2d243105 lvol 20 00:06:40.784 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=14010365-d1fc-42de-ab85-39d85b39d43c 00:06:40.784 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:41.042 17:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14010365-d1fc-42de-ab85-39d85b39d43c 00:06:41.299 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:41.557 [2024-12-09 17:56:04.448945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.557 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:41.814 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1370868 00:06:41.815 17:56:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:41.815 17:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:42.748 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 14010365-d1fc-42de-ab85-39d85b39d43c MY_SNAPSHOT 00:06:43.006 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a8dcb489-3617-4a5e-96ac-233282c740b5 00:06:43.006 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 14010365-d1fc-42de-ab85-39d85b39d43c 30 00:06:43.572 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a8dcb489-3617-4a5e-96ac-233282c740b5 MY_CLONE 00:06:43.830 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1afa5b3f-33a6-47e3-ade4-51d079d32667 00:06:43.830 17:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1afa5b3f-33a6-47e3-ade4-51d079d32667 00:06:44.397 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1370868 00:06:52.509 Initializing NVMe Controllers 00:06:52.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:52.509 Controller IO queue size 128, less than required. 00:06:52.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:52.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:52.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:52.509 Initialization complete. Launching workers. 00:06:52.509 ======================================================== 00:06:52.509 Latency(us) 00:06:52.509 Device Information : IOPS MiB/s Average min max 00:06:52.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10482.70 40.95 12216.98 2168.52 70929.76 00:06:52.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10427.50 40.73 12282.96 2661.87 67311.47 00:06:52.509 ======================================================== 00:06:52.509 Total : 20910.20 81.68 12249.89 2168.52 70929.76 00:06:52.509 00:06:52.509 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:52.509 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 14010365-d1fc-42de-ab85-39d85b39d43c 00:06:52.767 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50797164-6e9f-4e32-a2b4-abad2d243105 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:53.333 rmmod nvme_tcp 00:06:53.333 rmmod nvme_fabrics 00:06:53.333 rmmod nvme_keyring 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1370435 ']' 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1370435 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1370435 ']' 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1370435 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1370435 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1370435' 00:06:53.333 killing process with pid 1370435 00:06:53.333 17:56:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1370435
00:06:53.333 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1370435
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:53.592 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:55.500 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:55.500
00:06:55.500 real 0m19.353s
00:06:55.500 user 1m5.714s
00:06:55.500 sys 0m5.581s
00:06:55.500 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.500 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:55.500 ************************************
00:06:55.500 END TEST nvmf_lvol
00:06:55.500 ************************************
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:55.760 ************************************
00:06:55.760 START TEST nvmf_lvs_grow
00:06:55.760 ************************************
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:06:55.760 * Looking for test storage...
00:06:55.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.760 17:56:18
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.760 --rc genhtml_branch_coverage=1 00:06:55.760 --rc genhtml_function_coverage=1 00:06:55.760 --rc genhtml_legend=1 00:06:55.760 --rc geninfo_all_blocks=1 00:06:55.760 --rc geninfo_unexecuted_blocks=1 00:06:55.760 00:06:55.760 ' 
00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.760 --rc genhtml_branch_coverage=1 00:06:55.760 --rc genhtml_function_coverage=1 00:06:55.760 --rc genhtml_legend=1 00:06:55.760 --rc geninfo_all_blocks=1 00:06:55.760 --rc geninfo_unexecuted_blocks=1 00:06:55.760 00:06:55.760 ' 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.760 --rc genhtml_branch_coverage=1 00:06:55.760 --rc genhtml_function_coverage=1 00:06:55.760 --rc genhtml_legend=1 00:06:55.760 --rc geninfo_all_blocks=1 00:06:55.760 --rc geninfo_unexecuted_blocks=1 00:06:55.760 00:06:55.760 ' 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.760 --rc genhtml_branch_coverage=1 00:06:55.760 --rc genhtml_function_coverage=1 00:06:55.760 --rc genhtml_legend=1 00:06:55.760 --rc geninfo_all_blocks=1 00:06:55.760 --rc geninfo_unexecuted_blocks=1 00:06:55.760 00:06:55.760 ' 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.760 17:56:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.760 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.761 
17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.761 17:56:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.761 
17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.761 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:58.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:58.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.295 
17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:58.295 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:58.295 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.295 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.295 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.295 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.295 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:58.295 17:56:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:58.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:58.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms
00:06:58.295
00:06:58.295 --- 10.0.0.2 ping statistics ---
00:06:58.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:58.296 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:58.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:58.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:06:58.296
00:06:58.296 --- 10.0.0.1 ping statistics ---
00:06:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:58.296 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- #
nvmfappstart -m 0x1 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1374151 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1374151 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1374151 ']' 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.296 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.296 [2024-12-09 17:56:21.122215] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:06:58.296 [2024-12-09 17:56:21.122313] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.296 [2024-12-09 17:56:21.196453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.296 [2024-12-09 17:56:21.256390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.296 [2024-12-09 17:56:21.256443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.296 [2024-12-09 17:56:21.256471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.296 [2024-12-09 17:56:21.256482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.296 [2024-12-09 17:56:21.256492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:58.296 [2024-12-09 17:56:21.257122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.554 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:58.812 [2024-12-09 17:56:21.638142] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.812 ************************************ 00:06:58.812 START TEST lvs_grow_clean 00:06:58.812 ************************************ 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.812 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:59.069 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:59.069 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:59.327 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf97972d-a444-457c-92b4-c5f21d78a4d9 00:06:59.327 17:56:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:06:59.327 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:59.585 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:59.585 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:59.585 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf97972d-a444-457c-92b4-c5f21d78a4d9 lvol 150 00:06:59.843 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a8169b8b-2f56-4784-ba47-adaf3d031ec1 00:06:59.843 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:59.843 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:00.100 [2024-12-09 17:56:23.056996] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:00.100 [2024-12-09 17:56:23.057075] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:00.100 true 00:07:00.100 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:00.100 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:00.358 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:00.358 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:00.616 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8169b8b-2f56-4784-ba47-adaf3d031ec1 00:07:00.874 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:01.132 [2024-12-09 17:56:24.132241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.132 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1374596 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:01.390 17:56:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1374596 /var/tmp/bdevperf.sock 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1374596 ']' 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:01.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.390 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:01.648 [2024-12-09 17:56:24.458059] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:07:01.648 [2024-12-09 17:56:24.458142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374596 ] 00:07:01.648 [2024-12-09 17:56:24.522915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.648 [2024-12-09 17:56:24.579697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.906 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.906 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:01.906 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:02.164 Nvme0n1 00:07:02.164 17:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:02.422 [ 00:07:02.422 { 00:07:02.422 "name": "Nvme0n1", 00:07:02.422 "aliases": [ 00:07:02.422 "a8169b8b-2f56-4784-ba47-adaf3d031ec1" 00:07:02.422 ], 00:07:02.422 "product_name": "NVMe disk", 00:07:02.422 "block_size": 4096, 00:07:02.422 "num_blocks": 38912, 00:07:02.422 "uuid": "a8169b8b-2f56-4784-ba47-adaf3d031ec1", 00:07:02.422 "numa_id": 0, 00:07:02.422 "assigned_rate_limits": { 00:07:02.422 "rw_ios_per_sec": 0, 00:07:02.422 "rw_mbytes_per_sec": 0, 00:07:02.422 "r_mbytes_per_sec": 0, 00:07:02.422 "w_mbytes_per_sec": 0 00:07:02.422 }, 00:07:02.422 "claimed": false, 00:07:02.422 "zoned": false, 00:07:02.422 "supported_io_types": { 00:07:02.422 "read": true, 
00:07:02.422 "write": true, 00:07:02.422 "unmap": true, 00:07:02.422 "flush": true, 00:07:02.422 "reset": true, 00:07:02.422 "nvme_admin": true, 00:07:02.422 "nvme_io": true, 00:07:02.422 "nvme_io_md": false, 00:07:02.422 "write_zeroes": true, 00:07:02.422 "zcopy": false, 00:07:02.422 "get_zone_info": false, 00:07:02.422 "zone_management": false, 00:07:02.422 "zone_append": false, 00:07:02.422 "compare": true, 00:07:02.422 "compare_and_write": true, 00:07:02.422 "abort": true, 00:07:02.422 "seek_hole": false, 00:07:02.422 "seek_data": false, 00:07:02.422 "copy": true, 00:07:02.422 "nvme_iov_md": false 00:07:02.422 }, 00:07:02.422 "memory_domains": [ 00:07:02.422 { 00:07:02.422 "dma_device_id": "system", 00:07:02.422 "dma_device_type": 1 00:07:02.422 } 00:07:02.422 ], 00:07:02.422 "driver_specific": { 00:07:02.422 "nvme": [ 00:07:02.422 { 00:07:02.422 "trid": { 00:07:02.422 "trtype": "TCP", 00:07:02.422 "adrfam": "IPv4", 00:07:02.422 "traddr": "10.0.0.2", 00:07:02.422 "trsvcid": "4420", 00:07:02.422 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:02.422 }, 00:07:02.422 "ctrlr_data": { 00:07:02.422 "cntlid": 1, 00:07:02.422 "vendor_id": "0x8086", 00:07:02.422 "model_number": "SPDK bdev Controller", 00:07:02.422 "serial_number": "SPDK0", 00:07:02.422 "firmware_revision": "25.01", 00:07:02.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.422 "oacs": { 00:07:02.422 "security": 0, 00:07:02.422 "format": 0, 00:07:02.422 "firmware": 0, 00:07:02.422 "ns_manage": 0 00:07:02.422 }, 00:07:02.422 "multi_ctrlr": true, 00:07:02.422 "ana_reporting": false 00:07:02.422 }, 00:07:02.422 "vs": { 00:07:02.422 "nvme_version": "1.3" 00:07:02.422 }, 00:07:02.422 "ns_data": { 00:07:02.422 "id": 1, 00:07:02.422 "can_share": true 00:07:02.422 } 00:07:02.422 } 00:07:02.422 ], 00:07:02.422 "mp_policy": "active_passive" 00:07:02.422 } 00:07:02.422 } 00:07:02.422 ] 00:07:02.422 17:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1374733 00:07:02.422 17:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:02.422 17:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:02.422 Running I/O for 10 seconds... 00:07:03.797 Latency(us) 00:07:03.797 [2024-12-09T16:56:26.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.797 Nvme0n1 : 1.00 15496.00 60.53 0.00 0.00 0.00 0.00 0.00 00:07:03.797 [2024-12-09T16:56:26.839Z] =================================================================================================================== 00:07:03.798 [2024-12-09T16:56:26.839Z] Total : 15496.00 60.53 0.00 0.00 0.00 0.00 0.00 00:07:03.798 00:07:04.364 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:04.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.623 Nvme0n1 : 2.00 15685.50 61.27 0.00 0.00 0.00 0.00 0.00 00:07:04.623 [2024-12-09T16:56:27.664Z] =================================================================================================================== 00:07:04.623 [2024-12-09T16:56:27.664Z] Total : 15685.50 61.27 0.00 0.00 0.00 0.00 0.00 00:07:04.623 00:07:04.623 true 00:07:04.623 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:04.623 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:04.883 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:04.883 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:04.883 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1374733 00:07:05.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.450 Nvme0n1 : 3.00 15791.00 61.68 0.00 0.00 0.00 0.00 0.00 00:07:05.450 [2024-12-09T16:56:28.491Z] =================================================================================================================== 00:07:05.450 [2024-12-09T16:56:28.491Z] Total : 15791.00 61.68 0.00 0.00 0.00 0.00 0.00 00:07:05.450 00:07:06.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.826 Nvme0n1 : 4.00 15862.00 61.96 0.00 0.00 0.00 0.00 0.00 00:07:06.826 [2024-12-09T16:56:29.867Z] =================================================================================================================== 00:07:06.826 [2024-12-09T16:56:29.867Z] Total : 15862.00 61.96 0.00 0.00 0.00 0.00 0.00 00:07:06.826 00:07:07.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.759 Nvme0n1 : 5.00 15866.00 61.98 0.00 0.00 0.00 0.00 0.00 00:07:07.759 [2024-12-09T16:56:30.800Z] =================================================================================================================== 00:07:07.759 [2024-12-09T16:56:30.800Z] Total : 15866.00 61.98 0.00 0.00 0.00 0.00 0.00 00:07:07.759 00:07:08.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.693 Nvme0n1 : 6.00 15888.67 62.07 0.00 0.00 0.00 0.00 0.00 00:07:08.693 [2024-12-09T16:56:31.734Z] =================================================================================================================== 00:07:08.693 
[2024-12-09T16:56:31.734Z] Total : 15888.67 62.07 0.00 0.00 0.00 0.00 0.00 00:07:08.693 00:07:09.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.628 Nvme0n1 : 7.00 15952.14 62.31 0.00 0.00 0.00 0.00 0.00 00:07:09.628 [2024-12-09T16:56:32.669Z] =================================================================================================================== 00:07:09.628 [2024-12-09T16:56:32.669Z] Total : 15952.14 62.31 0.00 0.00 0.00 0.00 0.00 00:07:09.628 00:07:10.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.563 Nvme0n1 : 8.00 16006.00 62.52 0.00 0.00 0.00 0.00 0.00 00:07:10.563 [2024-12-09T16:56:33.604Z] =================================================================================================================== 00:07:10.563 [2024-12-09T16:56:33.604Z] Total : 16006.00 62.52 0.00 0.00 0.00 0.00 0.00 00:07:10.563 00:07:11.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.497 Nvme0n1 : 9.00 16056.11 62.72 0.00 0.00 0.00 0.00 0.00 00:07:11.497 [2024-12-09T16:56:34.538Z] =================================================================================================================== 00:07:11.497 [2024-12-09T16:56:34.538Z] Total : 16056.11 62.72 0.00 0.00 0.00 0.00 0.00 00:07:11.497 00:07:12.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.433 Nvme0n1 : 10.00 16082.60 62.82 0.00 0.00 0.00 0.00 0.00 00:07:12.433 [2024-12-09T16:56:35.474Z] =================================================================================================================== 00:07:12.433 [2024-12-09T16:56:35.474Z] Total : 16082.60 62.82 0.00 0.00 0.00 0.00 0.00 00:07:12.433 00:07:12.691 00:07:12.691 Latency(us) 00:07:12.691 [2024-12-09T16:56:35.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:12.691 Nvme0n1 : 10.01 16079.96 62.81 0.00 0.00 7955.47 4514.70 16602.45 00:07:12.691 [2024-12-09T16:56:35.732Z] =================================================================================================================== 00:07:12.691 [2024-12-09T16:56:35.732Z] Total : 16079.96 62.81 0.00 0.00 7955.47 4514.70 16602.45 00:07:12.691 { 00:07:12.691 "results": [ 00:07:12.691 { 00:07:12.691 "job": "Nvme0n1", 00:07:12.691 "core_mask": "0x2", 00:07:12.691 "workload": "randwrite", 00:07:12.691 "status": "finished", 00:07:12.691 "queue_depth": 128, 00:07:12.691 "io_size": 4096, 00:07:12.691 "runtime": 10.009605, 00:07:12.691 "iops": 16079.955203027492, 00:07:12.691 "mibps": 62.81232501182614, 00:07:12.691 "io_failed": 0, 00:07:12.691 "io_timeout": 0, 00:07:12.691 "avg_latency_us": 7955.474553198775, 00:07:12.691 "min_latency_us": 4514.702222222222, 00:07:12.691 "max_latency_us": 16602.453333333335 00:07:12.691 } 00:07:12.691 ], 00:07:12.691 "core_count": 1 00:07:12.691 } 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1374596 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1374596 ']' 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1374596 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1374596 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:12.691 17:56:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1374596' 00:07:12.691 killing process with pid 1374596 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1374596 00:07:12.691 Received shutdown signal, test time was about 10.000000 seconds 00:07:12.691 00:07:12.691 Latency(us) 00:07:12.691 [2024-12-09T16:56:35.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.691 [2024-12-09T16:56:35.732Z] =================================================================================================================== 00:07:12.691 [2024-12-09T16:56:35.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:12.691 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1374596 00:07:12.949 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.208 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.466 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:13.466 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:13.725 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:13.725 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:13.725 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.983 [2024-12-09 17:56:36.837380] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.983 
17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:13.983 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:14.242 request: 00:07:14.242 { 00:07:14.242 "uuid": "cf97972d-a444-457c-92b4-c5f21d78a4d9", 00:07:14.242 "method": "bdev_lvol_get_lvstores", 00:07:14.242 "req_id": 1 00:07:14.242 } 00:07:14.242 Got JSON-RPC error response 00:07:14.242 response: 00:07:14.242 { 00:07:14.242 "code": -19, 00:07:14.242 "message": "No such device" 00:07:14.242 } 00:07:14.242 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:14.242 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.242 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.242 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.242 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.501 aio_bdev 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev a8169b8b-2f56-4784-ba47-adaf3d031ec1 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a8169b8b-2f56-4784-ba47-adaf3d031ec1 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.501 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:14.760 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a8169b8b-2f56-4784-ba47-adaf3d031ec1 -t 2000 00:07:15.018 [ 00:07:15.018 { 00:07:15.018 "name": "a8169b8b-2f56-4784-ba47-adaf3d031ec1", 00:07:15.018 "aliases": [ 00:07:15.018 "lvs/lvol" 00:07:15.018 ], 00:07:15.018 "product_name": "Logical Volume", 00:07:15.018 "block_size": 4096, 00:07:15.018 "num_blocks": 38912, 00:07:15.018 "uuid": "a8169b8b-2f56-4784-ba47-adaf3d031ec1", 00:07:15.018 "assigned_rate_limits": { 00:07:15.018 "rw_ios_per_sec": 0, 00:07:15.018 "rw_mbytes_per_sec": 0, 00:07:15.018 "r_mbytes_per_sec": 0, 00:07:15.018 "w_mbytes_per_sec": 0 00:07:15.018 }, 00:07:15.018 "claimed": false, 00:07:15.018 "zoned": false, 00:07:15.018 "supported_io_types": { 00:07:15.018 "read": true, 00:07:15.018 "write": true, 00:07:15.018 "unmap": true, 00:07:15.018 "flush": false, 00:07:15.018 "reset": true, 00:07:15.018 
"nvme_admin": false, 00:07:15.018 "nvme_io": false, 00:07:15.018 "nvme_io_md": false, 00:07:15.018 "write_zeroes": true, 00:07:15.018 "zcopy": false, 00:07:15.018 "get_zone_info": false, 00:07:15.018 "zone_management": false, 00:07:15.018 "zone_append": false, 00:07:15.018 "compare": false, 00:07:15.018 "compare_and_write": false, 00:07:15.018 "abort": false, 00:07:15.018 "seek_hole": true, 00:07:15.018 "seek_data": true, 00:07:15.018 "copy": false, 00:07:15.018 "nvme_iov_md": false 00:07:15.018 }, 00:07:15.018 "driver_specific": { 00:07:15.018 "lvol": { 00:07:15.018 "lvol_store_uuid": "cf97972d-a444-457c-92b4-c5f21d78a4d9", 00:07:15.018 "base_bdev": "aio_bdev", 00:07:15.018 "thin_provision": false, 00:07:15.018 "num_allocated_clusters": 38, 00:07:15.018 "snapshot": false, 00:07:15.018 "clone": false, 00:07:15.018 "esnap_clone": false 00:07:15.018 } 00:07:15.018 } 00:07:15.018 } 00:07:15.018 ] 00:07:15.018 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:15.018 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:15.018 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:15.277 17:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:15.277 17:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:15.277 17:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:15.535 17:56:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:15.535 17:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8169b8b-2f56-4784-ba47-adaf3d031ec1 00:07:15.793 17:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf97972d-a444-457c-92b4-c5f21d78a4d9 00:07:16.051 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:16.310 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.310 00:07:16.310 real 0m17.644s 00:07:16.310 user 0m16.750s 00:07:16.310 sys 0m1.973s 00:07:16.310 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.310 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:16.310 ************************************ 00:07:16.310 END TEST lvs_grow_clean 00:07:16.310 ************************************ 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.568 ************************************ 
00:07:16.568 START TEST lvs_grow_dirty 00:07:16.568 ************************************ 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.568 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:16.827 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:16.827 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:17.085 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=37d91533-3a0f-4099-818e-b5521cb5f218 00:07:17.085 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:17.085 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:17.343 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:17.343 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:17.343 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 37d91533-3a0f-4099-818e-b5521cb5f218 lvol 150 00:07:17.602 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9255ced-2586-47ee-9504-5e55a6d1e972 00:07:17.602 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.602 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:17.895 [2024-12-09 17:56:40.774082] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:17.895 [2024-12-09 17:56:40.774179] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:17.895 true 00:07:17.895 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:17.895 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:18.182 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:18.182 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.440 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9255ced-2586-47ee-9504-5e55a6d1e972 00:07:18.699 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.958 [2024-12-09 17:56:41.865356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.958 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1376786 00:07:19.216 17:56:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1376786 /var/tmp/bdevperf.sock 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1376786 ']' 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:19.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.216 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:19.216 [2024-12-09 17:56:42.182580] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:07:19.216 [2024-12-09 17:56:42.182663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376786 ] 00:07:19.216 [2024-12-09 17:56:42.248205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.475 [2024-12-09 17:56:42.304718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.475 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.475 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:19.475 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:19.732 Nvme0n1 00:07:19.732 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:19.991 [ 00:07:19.991 { 00:07:19.991 "name": "Nvme0n1", 00:07:19.991 "aliases": [ 00:07:19.991 "e9255ced-2586-47ee-9504-5e55a6d1e972" 00:07:19.991 ], 00:07:19.991 "product_name": "NVMe disk", 00:07:19.991 "block_size": 4096, 00:07:19.991 "num_blocks": 38912, 00:07:19.991 "uuid": "e9255ced-2586-47ee-9504-5e55a6d1e972", 00:07:19.991 "numa_id": 0, 00:07:19.991 "assigned_rate_limits": { 00:07:19.991 "rw_ios_per_sec": 0, 00:07:19.991 "rw_mbytes_per_sec": 0, 00:07:19.991 "r_mbytes_per_sec": 0, 00:07:19.991 "w_mbytes_per_sec": 0 00:07:19.991 }, 00:07:19.991 "claimed": false, 00:07:19.991 "zoned": false, 00:07:19.991 "supported_io_types": { 00:07:19.991 "read": true, 
00:07:19.991 "write": true, 00:07:19.991 "unmap": true, 00:07:19.991 "flush": true, 00:07:19.991 "reset": true, 00:07:19.991 "nvme_admin": true, 00:07:19.991 "nvme_io": true, 00:07:19.991 "nvme_io_md": false, 00:07:19.991 "write_zeroes": true, 00:07:19.991 "zcopy": false, 00:07:19.991 "get_zone_info": false, 00:07:19.991 "zone_management": false, 00:07:19.991 "zone_append": false, 00:07:19.991 "compare": true, 00:07:19.991 "compare_and_write": true, 00:07:19.991 "abort": true, 00:07:19.991 "seek_hole": false, 00:07:19.991 "seek_data": false, 00:07:19.991 "copy": true, 00:07:19.991 "nvme_iov_md": false 00:07:19.991 }, 00:07:19.991 "memory_domains": [ 00:07:19.991 { 00:07:19.991 "dma_device_id": "system", 00:07:19.991 "dma_device_type": 1 00:07:19.991 } 00:07:19.991 ], 00:07:19.991 "driver_specific": { 00:07:19.991 "nvme": [ 00:07:19.991 { 00:07:19.991 "trid": { 00:07:19.991 "trtype": "TCP", 00:07:19.991 "adrfam": "IPv4", 00:07:19.991 "traddr": "10.0.0.2", 00:07:19.991 "trsvcid": "4420", 00:07:19.991 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:19.991 }, 00:07:19.991 "ctrlr_data": { 00:07:19.991 "cntlid": 1, 00:07:19.991 "vendor_id": "0x8086", 00:07:19.991 "model_number": "SPDK bdev Controller", 00:07:19.991 "serial_number": "SPDK0", 00:07:19.991 "firmware_revision": "25.01", 00:07:19.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:19.991 "oacs": { 00:07:19.991 "security": 0, 00:07:19.991 "format": 0, 00:07:19.991 "firmware": 0, 00:07:19.991 "ns_manage": 0 00:07:19.991 }, 00:07:19.991 "multi_ctrlr": true, 00:07:19.991 "ana_reporting": false 00:07:19.991 }, 00:07:19.991 "vs": { 00:07:19.991 "nvme_version": "1.3" 00:07:19.991 }, 00:07:19.991 "ns_data": { 00:07:19.991 "id": 1, 00:07:19.991 "can_share": true 00:07:19.991 } 00:07:19.991 } 00:07:19.991 ], 00:07:19.991 "mp_policy": "active_passive" 00:07:19.991 } 00:07:19.991 } 00:07:19.991 ] 00:07:19.991 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1376922 00:07:19.991 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:19.991 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:20.249 Running I/O for 10 seconds... 00:07:21.183 Latency(us) 00:07:21.183 [2024-12-09T16:56:44.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.183 Nvme0n1 : 1.00 15213.00 59.43 0.00 0.00 0.00 0.00 0.00 00:07:21.183 [2024-12-09T16:56:44.224Z] =================================================================================================================== 00:07:21.183 [2024-12-09T16:56:44.224Z] Total : 15213.00 59.43 0.00 0.00 0.00 0.00 0.00 00:07:21.183 00:07:22.119 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:22.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.119 Nvme0n1 : 2.00 15416.00 60.22 0.00 0.00 0.00 0.00 0.00 00:07:22.119 [2024-12-09T16:56:45.160Z] =================================================================================================================== 00:07:22.119 [2024-12-09T16:56:45.160Z] Total : 15416.00 60.22 0.00 0.00 0.00 0.00 0.00 00:07:22.119 00:07:22.377 true 00:07:22.377 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:22.377 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:22.635 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:22.635 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:22.635 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1376922 00:07:23.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.202 Nvme0n1 : 3.00 15489.33 60.51 0.00 0.00 0.00 0.00 0.00 00:07:23.202 [2024-12-09T16:56:46.243Z] =================================================================================================================== 00:07:23.202 [2024-12-09T16:56:46.243Z] Total : 15489.33 60.51 0.00 0.00 0.00 0.00 0.00 00:07:23.202 00:07:24.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.136 Nvme0n1 : 4.00 15585.75 60.88 0.00 0.00 0.00 0.00 0.00 00:07:24.136 [2024-12-09T16:56:47.177Z] =================================================================================================================== 00:07:24.136 [2024-12-09T16:56:47.177Z] Total : 15585.75 60.88 0.00 0.00 0.00 0.00 0.00 00:07:24.136 00:07:25.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.511 Nvme0n1 : 5.00 15656.60 61.16 0.00 0.00 0.00 0.00 0.00 00:07:25.511 [2024-12-09T16:56:48.552Z] =================================================================================================================== 00:07:25.511 [2024-12-09T16:56:48.552Z] Total : 15656.60 61.16 0.00 0.00 0.00 0.00 0.00 00:07:25.511 00:07:26.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.446 Nvme0n1 : 6.00 15714.17 61.38 0.00 0.00 0.00 0.00 0.00 00:07:26.446 [2024-12-09T16:56:49.487Z] =================================================================================================================== 00:07:26.446 
[2024-12-09T16:56:49.487Z] Total : 15714.17 61.38 0.00 0.00 0.00 0.00 0.00 00:07:26.446 00:07:27.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.381 Nvme0n1 : 7.00 15737.14 61.47 0.00 0.00 0.00 0.00 0.00 00:07:27.381 [2024-12-09T16:56:50.422Z] =================================================================================================================== 00:07:27.381 [2024-12-09T16:56:50.422Z] Total : 15737.14 61.47 0.00 0.00 0.00 0.00 0.00 00:07:27.381 00:07:28.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.315 Nvme0n1 : 8.00 15770.25 61.60 0.00 0.00 0.00 0.00 0.00 00:07:28.315 [2024-12-09T16:56:51.356Z] =================================================================================================================== 00:07:28.315 [2024-12-09T16:56:51.356Z] Total : 15770.25 61.60 0.00 0.00 0.00 0.00 0.00 00:07:28.315 00:07:29.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.250 Nvme0n1 : 9.00 15796.00 61.70 0.00 0.00 0.00 0.00 0.00 00:07:29.250 [2024-12-09T16:56:52.291Z] =================================================================================================================== 00:07:29.250 [2024-12-09T16:56:52.291Z] Total : 15796.00 61.70 0.00 0.00 0.00 0.00 0.00 00:07:29.250 00:07:30.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.184 Nvme0n1 : 10.00 15829.30 61.83 0.00 0.00 0.00 0.00 0.00 00:07:30.184 [2024-12-09T16:56:53.225Z] =================================================================================================================== 00:07:30.184 [2024-12-09T16:56:53.225Z] Total : 15829.30 61.83 0.00 0.00 0.00 0.00 0.00 00:07:30.184 00:07:30.184 00:07:30.184 Latency(us) 00:07:30.184 [2024-12-09T16:56:53.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:30.185 Nvme0n1 : 10.01 15831.51 61.84 0.00 0.00 8080.50 4271.98 16311.18 00:07:30.185 [2024-12-09T16:56:53.226Z] =================================================================================================================== 00:07:30.185 [2024-12-09T16:56:53.226Z] Total : 15831.51 61.84 0.00 0.00 8080.50 4271.98 16311.18 00:07:30.185 { 00:07:30.185 "results": [ 00:07:30.185 { 00:07:30.185 "job": "Nvme0n1", 00:07:30.185 "core_mask": "0x2", 00:07:30.185 "workload": "randwrite", 00:07:30.185 "status": "finished", 00:07:30.185 "queue_depth": 128, 00:07:30.185 "io_size": 4096, 00:07:30.185 "runtime": 10.006689, 00:07:30.185 "iops": 15831.510302758485, 00:07:30.185 "mibps": 61.84183712015033, 00:07:30.185 "io_failed": 0, 00:07:30.185 "io_timeout": 0, 00:07:30.185 "avg_latency_us": 8080.502905717465, 00:07:30.185 "min_latency_us": 4271.976296296296, 00:07:30.185 "max_latency_us": 16311.182222222222 00:07:30.185 } 00:07:30.185 ], 00:07:30.185 "core_count": 1 00:07:30.185 } 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1376786 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1376786 ']' 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1376786 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1376786 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:30.185 17:56:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1376786' 00:07:30.185 killing process with pid 1376786 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1376786 00:07:30.185 Received shutdown signal, test time was about 10.000000 seconds 00:07:30.185 00:07:30.185 Latency(us) 00:07:30.185 [2024-12-09T16:56:53.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.185 [2024-12-09T16:56:53.226Z] =================================================================================================================== 00:07:30.185 [2024-12-09T16:56:53.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:30.185 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1376786 00:07:30.443 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.700 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.958 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:30.958 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:31.217 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:31.217 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:31.217 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1374151 00:07:31.217 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1374151 00:07:31.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1374151 Killed "${NVMF_APP[@]}" "$@" 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1378253 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1378253 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1378253 ']' 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.475 17:56:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.475 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 [2024-12-09 17:56:54.331748] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:07:31.475 [2024-12-09 17:56:54.331838] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.475 [2024-12-09 17:56:54.405477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.475 [2024-12-09 17:56:54.463534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.475 [2024-12-09 17:56:54.463613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.475 [2024-12-09 17:56:54.463628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.475 [2024-12-09 17:56:54.463638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.475 [2024-12-09 17:56:54.463648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:31.475 [2024-12-09 17:56:54.464203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.734 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.992 [2024-12-09 17:56:54.840377] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:31.992 [2024-12-09 17:56:54.840513] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:31.992 [2024-12-09 17:56:54.840603] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e9255ced-2586-47ee-9504-5e55a6d1e972 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e9255ced-2586-47ee-9504-5e55a6d1e972 
00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.992 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:32.248 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9255ced-2586-47ee-9504-5e55a6d1e972 -t 2000 00:07:32.506 [ 00:07:32.506 { 00:07:32.506 "name": "e9255ced-2586-47ee-9504-5e55a6d1e972", 00:07:32.506 "aliases": [ 00:07:32.506 "lvs/lvol" 00:07:32.506 ], 00:07:32.506 "product_name": "Logical Volume", 00:07:32.506 "block_size": 4096, 00:07:32.506 "num_blocks": 38912, 00:07:32.506 "uuid": "e9255ced-2586-47ee-9504-5e55a6d1e972", 00:07:32.506 "assigned_rate_limits": { 00:07:32.506 "rw_ios_per_sec": 0, 00:07:32.506 "rw_mbytes_per_sec": 0, 00:07:32.506 "r_mbytes_per_sec": 0, 00:07:32.506 "w_mbytes_per_sec": 0 00:07:32.506 }, 00:07:32.506 "claimed": false, 00:07:32.506 "zoned": false, 00:07:32.506 "supported_io_types": { 00:07:32.506 "read": true, 00:07:32.506 "write": true, 00:07:32.506 "unmap": true, 00:07:32.506 "flush": false, 00:07:32.506 "reset": true, 00:07:32.506 "nvme_admin": false, 00:07:32.506 "nvme_io": false, 00:07:32.506 "nvme_io_md": false, 00:07:32.506 "write_zeroes": true, 00:07:32.506 "zcopy": false, 00:07:32.506 "get_zone_info": false, 00:07:32.506 "zone_management": false, 00:07:32.506 "zone_append": 
false, 00:07:32.506 "compare": false, 00:07:32.506 "compare_and_write": false, 00:07:32.506 "abort": false, 00:07:32.506 "seek_hole": true, 00:07:32.506 "seek_data": true, 00:07:32.506 "copy": false, 00:07:32.506 "nvme_iov_md": false 00:07:32.506 }, 00:07:32.506 "driver_specific": { 00:07:32.506 "lvol": { 00:07:32.506 "lvol_store_uuid": "37d91533-3a0f-4099-818e-b5521cb5f218", 00:07:32.506 "base_bdev": "aio_bdev", 00:07:32.506 "thin_provision": false, 00:07:32.506 "num_allocated_clusters": 38, 00:07:32.506 "snapshot": false, 00:07:32.506 "clone": false, 00:07:32.506 "esnap_clone": false 00:07:32.506 } 00:07:32.506 } 00:07:32.506 } 00:07:32.506 ] 00:07:32.506 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:32.506 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:32.506 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:32.764 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:32.764 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:32.764 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:33.022 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:33.022 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:33.280 [2024-12-09 17:56:56.214213] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.280 17:56:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:33.280 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:33.537 request: 00:07:33.537 { 00:07:33.537 "uuid": "37d91533-3a0f-4099-818e-b5521cb5f218", 00:07:33.537 "method": "bdev_lvol_get_lvstores", 00:07:33.537 "req_id": 1 00:07:33.537 } 00:07:33.537 Got JSON-RPC error response 00:07:33.537 response: 00:07:33.537 { 00:07:33.537 "code": -19, 00:07:33.537 "message": "No such device" 00:07:33.537 } 00:07:33.537 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:33.537 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.537 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.537 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.537 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.795 aio_bdev 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9255ced-2586-47ee-9504-5e55a6d1e972 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e9255ced-2586-47ee-9504-5e55a6d1e972 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.795 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:34.052 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9255ced-2586-47ee-9504-5e55a6d1e972 -t 2000 00:07:34.311 [ 00:07:34.311 { 00:07:34.311 "name": "e9255ced-2586-47ee-9504-5e55a6d1e972", 00:07:34.311 "aliases": [ 00:07:34.311 "lvs/lvol" 00:07:34.311 ], 00:07:34.311 "product_name": "Logical Volume", 00:07:34.311 "block_size": 4096, 00:07:34.311 "num_blocks": 38912, 00:07:34.311 "uuid": "e9255ced-2586-47ee-9504-5e55a6d1e972", 00:07:34.311 "assigned_rate_limits": { 00:07:34.311 "rw_ios_per_sec": 0, 00:07:34.311 "rw_mbytes_per_sec": 0, 00:07:34.311 "r_mbytes_per_sec": 0, 00:07:34.311 "w_mbytes_per_sec": 0 00:07:34.311 }, 00:07:34.311 "claimed": false, 00:07:34.311 "zoned": false, 00:07:34.311 "supported_io_types": { 00:07:34.311 "read": true, 00:07:34.311 "write": true, 00:07:34.311 "unmap": true, 00:07:34.311 "flush": false, 00:07:34.311 "reset": true, 00:07:34.311 "nvme_admin": false, 00:07:34.311 "nvme_io": false, 00:07:34.311 "nvme_io_md": false, 00:07:34.311 "write_zeroes": true, 00:07:34.311 "zcopy": false, 00:07:34.311 "get_zone_info": false, 00:07:34.311 "zone_management": false, 00:07:34.311 "zone_append": false, 00:07:34.311 "compare": false, 00:07:34.311 "compare_and_write": false, 
00:07:34.311 "abort": false, 00:07:34.311 "seek_hole": true, 00:07:34.311 "seek_data": true, 00:07:34.311 "copy": false, 00:07:34.311 "nvme_iov_md": false 00:07:34.311 }, 00:07:34.311 "driver_specific": { 00:07:34.311 "lvol": { 00:07:34.311 "lvol_store_uuid": "37d91533-3a0f-4099-818e-b5521cb5f218", 00:07:34.311 "base_bdev": "aio_bdev", 00:07:34.311 "thin_provision": false, 00:07:34.311 "num_allocated_clusters": 38, 00:07:34.311 "snapshot": false, 00:07:34.311 "clone": false, 00:07:34.311 "esnap_clone": false 00:07:34.311 } 00:07:34.311 } 00:07:34.311 } 00:07:34.311 ] 00:07:34.311 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:34.311 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:34.311 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:34.877 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:34.877 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:34.877 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:34.877 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:34.877 17:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9255ced-2586-47ee-9504-5e55a6d1e972 00:07:35.135 17:56:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37d91533-3a0f-4099-818e-b5521cb5f218 00:07:35.701 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:35.701 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.960 00:07:35.960 real 0m19.377s 00:07:35.960 user 0m49.148s 00:07:35.960 sys 0m4.468s 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.960 ************************************ 00:07:35.960 END TEST lvs_grow_dirty 00:07:35.960 ************************************ 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:35.960 nvmf_trace.0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.960 rmmod nvme_tcp 00:07:35.960 rmmod nvme_fabrics 00:07:35.960 rmmod nvme_keyring 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1378253 ']' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1378253 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1378253 ']' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1378253 
00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1378253 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1378253' 00:07:35.960 killing process with pid 1378253 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1378253 00:07:35.960 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1378253 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.220 17:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.755 00:07:38.755 real 0m42.619s 00:07:38.755 user 1m11.948s 00:07:38.755 sys 0m8.410s 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:38.755 ************************************ 00:07:38.755 END TEST nvmf_lvs_grow 00:07:38.755 ************************************ 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.755 ************************************ 00:07:38.755 START TEST nvmf_bdev_io_wait 00:07:38.755 ************************************ 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:38.755 * Looking for test storage... 
00:07:38.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.755 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.756 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.756 --rc genhtml_branch_coverage=1 00:07:38.756 --rc genhtml_function_coverage=1 00:07:38.756 --rc genhtml_legend=1 00:07:38.756 --rc geninfo_all_blocks=1 00:07:38.756 --rc geninfo_unexecuted_blocks=1 00:07:38.756 00:07:38.756 ' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.756 --rc genhtml_branch_coverage=1 00:07:38.756 --rc genhtml_function_coverage=1 00:07:38.756 --rc genhtml_legend=1 00:07:38.756 --rc geninfo_all_blocks=1 00:07:38.756 --rc geninfo_unexecuted_blocks=1 00:07:38.756 00:07:38.756 ' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.756 --rc genhtml_branch_coverage=1 00:07:38.756 --rc genhtml_function_coverage=1 00:07:38.756 --rc genhtml_legend=1 00:07:38.756 --rc geninfo_all_blocks=1 00:07:38.756 --rc geninfo_unexecuted_blocks=1 00:07:38.756 00:07:38.756 ' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.756 --rc genhtml_branch_coverage=1 00:07:38.756 --rc genhtml_function_coverage=1 00:07:38.756 --rc genhtml_legend=1 00:07:38.756 --rc geninfo_all_blocks=1 00:07:38.756 --rc geninfo_unexecuted_blocks=1 00:07:38.756 00:07:38.756 ' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.756 17:57:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.756 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.757 17:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.664 17:57:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:40.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:40.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.664 17:57:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:40.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.664 
17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:40.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.664 17:57:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.664 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:07:40.665 00:07:40.665 --- 10.0.0.2 ping statistics --- 00:07:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.665 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:07:40.665 00:07:40.665 --- 10.0.0.1 ping statistics --- 00:07:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.665 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.665 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.923 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.923 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.923 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1380906 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1380906 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1380906 ']' 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.924 17:57:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:40.924 [2024-12-09 17:57:03.782990] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:07:40.924 [2024-12-09 17:57:03.783084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.924 [2024-12-09 17:57:03.862163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.924 [2024-12-09 17:57:03.925027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.924 [2024-12-09 17:57:03.925076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:40.924 [2024-12-09 17:57:03.925106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.924 [2024-12-09 17:57:03.925117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.924 [2024-12-09 17:57:03.925132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.924 [2024-12-09 17:57:03.926776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.924 [2024-12-09 17:57:03.926806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.924 [2024-12-09 17:57:03.926877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.924 [2024-12-09 17:57:03.926881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 17:57:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 [2024-12-09 17:57:04.139929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 Malloc0 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 
17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 [2024-12-09 17:57:04.192681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1380954 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1380956 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1380958 00:07:41.182 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.182 { 00:07:41.182 "params": { 00:07:41.182 "name": "Nvme$subsystem", 00:07:41.182 "trtype": "$TEST_TRANSPORT", 00:07:41.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "$NVMF_PORT", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.183 "hdgst": ${hdgst:-false}, 00:07:41.183 "ddgst": ${ddgst:-false} 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 } 00:07:41.183 EOF 00:07:41.183 )") 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:41.183 17:57:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1380960 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.183 { 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme$subsystem", 00:07:41.183 "trtype": "$TEST_TRANSPORT", 00:07:41.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "$NVMF_PORT", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.183 "hdgst": ${hdgst:-false}, 00:07:41.183 "ddgst": ${ddgst:-false} 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 } 00:07:41.183 EOF 00:07:41.183 )") 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.183 { 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme$subsystem", 00:07:41.183 "trtype": "$TEST_TRANSPORT", 00:07:41.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "$NVMF_PORT", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.183 "hdgst": ${hdgst:-false}, 00:07:41.183 "ddgst": ${ddgst:-false} 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 } 00:07:41.183 EOF 00:07:41.183 )") 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.183 { 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme$subsystem", 00:07:41.183 "trtype": "$TEST_TRANSPORT", 00:07:41.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "$NVMF_PORT", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.183 "hdgst": ${hdgst:-false}, 00:07:41.183 "ddgst": ${ddgst:-false} 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 } 00:07:41.183 EOF 00:07:41.183 )") 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1380954 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme1", 00:07:41.183 "trtype": "tcp", 00:07:41.183 "traddr": "10.0.0.2", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "4420", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.183 "hdgst": false, 00:07:41.183 "ddgst": false 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 }' 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme1", 00:07:41.183 "trtype": "tcp", 00:07:41.183 "traddr": "10.0.0.2", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "4420", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.183 "hdgst": false, 00:07:41.183 "ddgst": false 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 }' 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme1", 00:07:41.183 "trtype": "tcp", 00:07:41.183 "traddr": "10.0.0.2", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "4420", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.183 "hdgst": false, 00:07:41.183 "ddgst": false 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 }' 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.183 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.183 "params": { 00:07:41.183 "name": "Nvme1", 00:07:41.183 "trtype": "tcp", 00:07:41.183 "traddr": "10.0.0.2", 00:07:41.183 "adrfam": "ipv4", 00:07:41.183 "trsvcid": "4420", 00:07:41.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.183 "hdgst": false, 00:07:41.183 "ddgst": false 00:07:41.183 }, 00:07:41.183 "method": "bdev_nvme_attach_controller" 00:07:41.183 }' 00:07:41.441 [2024-12-09 17:57:04.243637] Starting SPDK v25.01-pre git sha1 
9237e57ed / DPDK 24.03.0 initialization... 00:07:41.441 [2024-12-09 17:57:04.243639] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:07:41.441 [2024-12-09 17:57:04.243639] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:07:41.441 [2024-12-09 17:57:04.243638] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:07:41.441 [2024-12-09 17:57:04.243724] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:41.441 [2024-12-09 17:57:04.243725] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:41.441 [2024-12-09 17:57:04.243726] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:41.441 [2024-12-09 17:57:04.243727] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:41.441 [2024-12-09 17:57:04.433371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.699 [2024-12-09 17:57:04.488149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:41.699 [2024-12-09 17:57:04.533630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.699 [2024-12-09 17:57:04.585690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:41.699 [2024-12-09
17:57:04.600843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.699 [2024-12-09 17:57:04.650834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:41.699 [2024-12-09 17:57:04.665384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.699 [2024-12-09 17:57:04.714868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:41.957 Running I/O for 1 seconds... 00:07:41.957 Running I/O for 1 seconds... 00:07:41.957 Running I/O for 1 seconds... 00:07:41.957 Running I/O for 1 seconds... 00:07:42.890 6128.00 IOPS, 23.94 MiB/s [2024-12-09T16:57:05.931Z] 177304.00 IOPS, 692.59 MiB/s 00:07:42.890 Latency(us) 00:07:42.890 [2024-12-09T16:57:05.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:42.890 Nvme1n1 : 1.02 6135.01 23.96 0.00 0.00 20680.63 8980.86 33204.91 00:07:42.890 [2024-12-09T16:57:05.931Z] =================================================================================================================== 00:07:42.890 [2024-12-09T16:57:05.931Z] Total : 6135.01 23.96 0.00 0.00 20680.63 8980.86 33204.91 00:07:42.890 00:07:42.890 Latency(us) 00:07:42.890 [2024-12-09T16:57:05.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.890 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:42.890 Nvme1n1 : 1.00 176965.15 691.27 0.00 0.00 719.35 298.86 1893.26 00:07:42.890 [2024-12-09T16:57:05.931Z] =================================================================================================================== 00:07:42.890 [2024-12-09T16:57:05.931Z] Total : 176965.15 691.27 0.00 0.00 719.35 298.86 1893.26 00:07:42.890 5954.00 IOPS, 23.26 MiB/s 00:07:42.890 Latency(us) 00:07:42.890 [2024-12-09T16:57:05.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.891 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:07:42.891 Nvme1n1 : 1.01 6058.16 23.66 0.00 0.00 21057.63 4781.70 38641.97 00:07:42.891 [2024-12-09T16:57:05.932Z] =================================================================================================================== 00:07:42.891 [2024-12-09T16:57:05.932Z] Total : 6058.16 23.66 0.00 0.00 21057.63 4781.70 38641.97 00:07:43.149 9072.00 IOPS, 35.44 MiB/s 00:07:43.149 Latency(us) 00:07:43.149 [2024-12-09T16:57:06.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.149 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:43.149 Nvme1n1 : 1.01 9133.38 35.68 0.00 0.00 13953.27 5267.15 23010.42 00:07:43.149 [2024-12-09T16:57:06.190Z] =================================================================================================================== 00:07:43.149 [2024-12-09T16:57:06.190Z] Total : 9133.38 35.68 0.00 0.00 13953.27 5267.15 23010.42 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1380956 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1380958 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1380960 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:43.149 17:57:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.149 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.149 rmmod nvme_tcp 00:07:43.149 rmmod nvme_fabrics 00:07:43.149 rmmod nvme_keyring 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1380906 ']' 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1380906 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1380906 ']' 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1380906 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1380906 
00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1380906' 00:07:43.408 killing process with pid 1380906 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1380906 00:07:43.408 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1380906 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.668 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.668 17:57:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.571 00:07:45.571 real 0m7.264s 00:07:45.571 user 0m15.839s 00:07:45.571 sys 0m3.583s 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.571 ************************************ 00:07:45.571 END TEST nvmf_bdev_io_wait 00:07:45.571 ************************************ 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.571 ************************************ 00:07:45.571 START TEST nvmf_queue_depth 00:07:45.571 ************************************ 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:45.571 * Looking for test storage... 
00:07:45.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.571 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:45.830 
17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.830 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:45.830 --rc genhtml_branch_coverage=1 00:07:45.830 --rc genhtml_function_coverage=1 00:07:45.830 --rc genhtml_legend=1 00:07:45.830 --rc geninfo_all_blocks=1 00:07:45.830 --rc geninfo_unexecuted_blocks=1 00:07:45.830 00:07:45.830 ' 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.830 --rc genhtml_branch_coverage=1 00:07:45.830 --rc genhtml_function_coverage=1 00:07:45.830 --rc genhtml_legend=1 00:07:45.830 --rc geninfo_all_blocks=1 00:07:45.830 --rc geninfo_unexecuted_blocks=1 00:07:45.830 00:07:45.830 ' 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.830 --rc genhtml_branch_coverage=1 00:07:45.830 --rc genhtml_function_coverage=1 00:07:45.830 --rc genhtml_legend=1 00:07:45.830 --rc geninfo_all_blocks=1 00:07:45.830 --rc geninfo_unexecuted_blocks=1 00:07:45.830 00:07:45.830 ' 00:07:45.830 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.830 --rc genhtml_branch_coverage=1 00:07:45.830 --rc genhtml_function_coverage=1 00:07:45.830 --rc genhtml_legend=1 00:07:45.830 --rc geninfo_all_blocks=1 00:07:45.830 --rc geninfo_unexecuted_blocks=1 00:07:45.830 00:07:45.830 ' 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.831 17:57:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.831 17:57:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.831 17:57:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.831 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.733 17:57:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.733 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:47.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:47.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:47.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:47.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.734 
17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.734 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:07:47.993 00:07:47.993 --- 10.0.0.2 ping statistics --- 00:07:47.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.993 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:47.993 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:47.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:07:47.993 00:07:47.993 --- 10.0.0.1 ping statistics --- 00:07:47.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.993 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.993 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1383727 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1383727 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1383727 ']' 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.252 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.252 [2024-12-09 17:57:11.085725] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:07:48.252 [2024-12-09 17:57:11.085822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.252 [2024-12-09 17:57:11.164510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.252 [2024-12-09 17:57:11.222745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.252 [2024-12-09 17:57:11.222810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:48.252 [2024-12-09 17:57:11.222839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.252 [2024-12-09 17:57:11.222850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.252 [2024-12-09 17:57:11.222860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.252 [2024-12-09 17:57:11.223598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 [2024-12-09 17:57:11.373504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 Malloc0 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 [2024-12-09 17:57:11.422358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.511 17:57:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1383818 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1383818 /var/tmp/bdevperf.sock 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1383818 ']' 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.511 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.512 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.512 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.512 [2024-12-09 17:57:11.468668] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:07:48.512 [2024-12-09 17:57:11.468732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383818 ] 00:07:48.512 [2024-12-09 17:57:11.533642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.798 [2024-12-09 17:57:11.590998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.798 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.798 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:48.798 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:48.798 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.798 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:49.108 NVMe0n1 00:07:49.108 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.108 17:57:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.108 Running I/O for 10 seconds... 
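bdevperf above was launched with `-q 1024 -o 4096`; the MiB/s figures it reports are just IOPS scaled by that 4096-byte I/O size. A quick sketch of the conversion (the IOPS value below is an arbitrary example, not a number from this run):

```shell
#!/usr/bin/env bash
# Illustrative conversion between bdevperf's IOPS and MiB/s columns.
# io_size matches the -o 4096 flag on the bdevperf command line above;
# the IOPS value is a made-up example, not taken from this run.
iops=8192
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$iops IOPS at $io_size B = $mibps MiB/s"   # 8192 * 4096 / 2^20 = 32.00
```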
00:07:51.419 8192.00 IOPS, 32.00 MiB/s [2024-12-09T16:57:15.394Z] 8425.00 IOPS, 32.91 MiB/s [2024-12-09T16:57:16.328Z] 8524.33 IOPS, 33.30 MiB/s [2024-12-09T16:57:17.263Z] 8544.25 IOPS, 33.38 MiB/s [2024-12-09T16:57:18.197Z] 8595.80 IOPS, 33.58 MiB/s [2024-12-09T16:57:19.133Z] 8628.17 IOPS, 33.70 MiB/s [2024-12-09T16:57:20.507Z] 8619.57 IOPS, 33.67 MiB/s [2024-12-09T16:57:21.074Z] 8663.12 IOPS, 33.84 MiB/s [2024-12-09T16:57:22.449Z] 8638.00 IOPS, 33.74 MiB/s [2024-12-09T16:57:22.449Z] 8657.90 IOPS, 33.82 MiB/s 00:07:59.408 Latency(us) 00:07:59.408 [2024-12-09T16:57:22.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.408 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:59.408 Verification LBA range: start 0x0 length 0x4000 00:07:59.408 NVMe0n1 : 10.09 8679.85 33.91 0.00 0.00 117404.99 21942.42 69905.07 00:07:59.408 [2024-12-09T16:57:22.449Z] =================================================================================================================== 00:07:59.408 [2024-12-09T16:57:22.449Z] Total : 8679.85 33.91 0.00 0.00 117404.99 21942.42 69905.07 00:07:59.408 { 00:07:59.408 "results": [ 00:07:59.408 { 00:07:59.408 "job": "NVMe0n1", 00:07:59.408 "core_mask": "0x1", 00:07:59.408 "workload": "verify", 00:07:59.408 "status": "finished", 00:07:59.408 "verify_range": { 00:07:59.408 "start": 0, 00:07:59.408 "length": 16384 00:07:59.408 }, 00:07:59.408 "queue_depth": 1024, 00:07:59.408 "io_size": 4096, 00:07:59.408 "runtime": 10.094644, 00:07:59.408 "iops": 8679.850423650403, 00:07:59.408 "mibps": 33.90566571738439, 00:07:59.408 "io_failed": 0, 00:07:59.408 "io_timeout": 0, 00:07:59.408 "avg_latency_us": 117404.99124089713, 00:07:59.408 "min_latency_us": 21942.423703703702, 00:07:59.408 "max_latency_us": 69905.06666666667 00:07:59.408 } 00:07:59.408 ], 00:07:59.408 "core_count": 1 00:07:59.408 } 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1383818 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1383818 ']' 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1383818 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383818 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383818' 00:07:59.408 killing process with pid 1383818 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1383818 00:07:59.408 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.408 00:07:59.408 Latency(us) 00:07:59.408 [2024-12-09T16:57:22.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.408 [2024-12-09T16:57:22.449Z] =================================================================================================================== 00:07:59.408 [2024-12-09T16:57:22.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1383818 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.408 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.408 rmmod nvme_tcp 00:07:59.666 rmmod nvme_fabrics 00:07:59.666 rmmod nvme_keyring 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1383727 ']' 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1383727 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1383727 ']' 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1383727 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383727 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383727' 00:07:59.666 killing process with pid 1383727 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1383727 00:07:59.666 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1383727 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.926 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.927 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.927 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.927 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.927 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.835 17:57:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.835 00:08:01.835 real 0m16.283s 00:08:01.835 user 0m22.899s 00:08:01.835 sys 0m3.014s 00:08:01.835 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.835 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.835 ************************************ 00:08:01.835 END TEST nvmf_queue_depth 00:08:01.835 ************************************ 00:08:01.835 17:57:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:01.835 17:57:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.835 17:57:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.835 17:57:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.095 ************************************ 00:08:02.095 START TEST nvmf_target_multipath 00:08:02.095 ************************************ 00:08:02.095 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:02.095 * Looking for test storage... 
00:08:02.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.095 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:02.095 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:02.095 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:02.095 17:57:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
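The xtrace above steps through the dotted-version comparison in scripts/common.sh (`lt` → `cmp_versions`) that decides whether the installed lcov (1.15) predates 2. A simplified standalone sketch of that comparison — not the exact SPDK helper, whose real logic lives in scripts/common.sh:

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version "less than" check traced above.
# Splits each version on '.', compares components numerically left to right,
# treating missing components as 0 (so 1.15 vs 2 compares 1 vs 2 first).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
        ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```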
00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.095 --rc genhtml_branch_coverage=1 00:08:02.095 --rc genhtml_function_coverage=1 00:08:02.095 --rc genhtml_legend=1 00:08:02.095 --rc geninfo_all_blocks=1 00:08:02.095 --rc geninfo_unexecuted_blocks=1 00:08:02.095 00:08:02.095 ' 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.095 --rc genhtml_branch_coverage=1 00:08:02.095 --rc genhtml_function_coverage=1 00:08:02.095 --rc genhtml_legend=1 00:08:02.095 --rc geninfo_all_blocks=1 00:08:02.095 --rc geninfo_unexecuted_blocks=1 00:08:02.095 00:08:02.095 ' 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.095 --rc genhtml_branch_coverage=1 00:08:02.095 --rc genhtml_function_coverage=1 00:08:02.095 --rc genhtml_legend=1 00:08:02.095 --rc geninfo_all_blocks=1 00:08:02.095 --rc geninfo_unexecuted_blocks=1 00:08:02.095 00:08:02.095 ' 00:08:02.095 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.095 --rc genhtml_branch_coverage=1 00:08:02.095 --rc genhtml_function_coverage=1 00:08:02.095 --rc genhtml_legend=1 00:08:02.095 --rc geninfo_all_blocks=1 00:08:02.095 --rc geninfo_unexecuted_blocks=1 00:08:02.096 00:08:02.096 ' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.096 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.629 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.629 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.629 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.629 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.629 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.629 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:04.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:04.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:04.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.630 17:57:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:04.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:08:04.630 00:08:04.630 --- 10.0.0.2 ping statistics --- 00:08:04.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.630 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:08:04.630 00:08:04.630 --- 10.0.0.1 ping statistics --- 00:08:04.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.630 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.630 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:04.631 only one NIC for nvmf test 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:04.631 17:57:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.631 rmmod nvme_tcp 00:08:04.631 rmmod nvme_fabrics 00:08:04.631 rmmod nvme_keyring 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.631 17:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.538 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.539 00:08:06.539 real 0m4.586s 00:08:06.539 user 0m0.910s 00:08:06.539 sys 0m1.702s 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:06.539 ************************************ 00:08:06.539 END TEST nvmf_target_multipath 00:08:06.539 ************************************ 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.539 ************************************ 00:08:06.539 START TEST nvmf_zcopy 00:08:06.539 ************************************ 00:08:06.539 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:06.539 * Looking for test storage... 00:08:06.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.798 17:57:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.798 --rc genhtml_branch_coverage=1 00:08:06.798 --rc genhtml_function_coverage=1 00:08:06.798 --rc genhtml_legend=1 00:08:06.798 --rc geninfo_all_blocks=1 00:08:06.798 --rc geninfo_unexecuted_blocks=1 00:08:06.798 00:08:06.798 ' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.798 --rc genhtml_branch_coverage=1 00:08:06.798 --rc genhtml_function_coverage=1 00:08:06.798 --rc genhtml_legend=1 00:08:06.798 --rc geninfo_all_blocks=1 00:08:06.798 --rc geninfo_unexecuted_blocks=1 00:08:06.798 00:08:06.798 ' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.798 --rc genhtml_branch_coverage=1 00:08:06.798 --rc genhtml_function_coverage=1 00:08:06.798 --rc genhtml_legend=1 00:08:06.798 --rc geninfo_all_blocks=1 00:08:06.798 --rc geninfo_unexecuted_blocks=1 00:08:06.798 00:08:06.798 ' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.798 --rc genhtml_branch_coverage=1 00:08:06.798 --rc 
genhtml_function_coverage=1 00:08:06.798 --rc genhtml_legend=1 00:08:06.798 --rc geninfo_all_blocks=1 00:08:06.798 --rc geninfo_unexecuted_blocks=1 00:08:06.798 00:08:06.798 ' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.798 17:57:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.798 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.799 17:57:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.799 17:57:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.333 17:57:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:09.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:09.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:09.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:09.333 17:57:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:09.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.333 17:57:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.333 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.334 17:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:09.334 00:08:09.334 --- 10.0.0.2 ping statistics --- 00:08:09.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.334 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:09.334 00:08:09.334 --- 10.0.0.1 ping statistics --- 00:08:09.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.334 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1389034 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1389034 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1389034 ']' 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.334 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.334 [2024-12-09 17:57:32.157981] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:08:09.334 [2024-12-09 17:57:32.158072] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.334 [2024-12-09 17:57:32.230790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.334 [2024-12-09 17:57:32.284760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.334 [2024-12-09 17:57:32.284820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.334 [2024-12-09 17:57:32.284849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.334 [2024-12-09 17:57:32.284860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.334 [2024-12-09 17:57:32.284869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.334 [2024-12-09 17:57:32.285469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.592 [2024-12-09 17:57:32.425421] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.592 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.593 [2024-12-09 17:57:32.441674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.593 malloc0 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:09.593 { 00:08:09.593 "params": { 00:08:09.593 "name": "Nvme$subsystem", 00:08:09.593 "trtype": "$TEST_TRANSPORT", 00:08:09.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.593 "adrfam": "ipv4", 00:08:09.593 "trsvcid": "$NVMF_PORT", 00:08:09.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.593 "hdgst": ${hdgst:-false}, 00:08:09.593 "ddgst": ${ddgst:-false} 00:08:09.593 }, 00:08:09.593 "method": "bdev_nvme_attach_controller" 00:08:09.593 } 00:08:09.593 EOF 00:08:09.593 )") 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:09.593 17:57:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:09.593 "params": { 00:08:09.593 "name": "Nvme1", 00:08:09.593 "trtype": "tcp", 00:08:09.593 "traddr": "10.0.0.2", 00:08:09.593 "adrfam": "ipv4", 00:08:09.593 "trsvcid": "4420", 00:08:09.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:09.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:09.593 "hdgst": false, 00:08:09.593 "ddgst": false 00:08:09.593 }, 00:08:09.593 "method": "bdev_nvme_attach_controller" 00:08:09.593 }' 00:08:09.593 [2024-12-09 17:57:32.518986] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:08:09.593 [2024-12-09 17:57:32.519066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389062 ] 00:08:09.593 [2024-12-09 17:57:32.586991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.851 [2024-12-09 17:57:32.645798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.851 Running I/O for 10 seconds... 
00:08:12.159 5865.00 IOPS, 45.82 MiB/s [2024-12-09T16:57:36.134Z] 5894.00 IOPS, 46.05 MiB/s [2024-12-09T16:57:37.068Z] 5907.33 IOPS, 46.15 MiB/s [2024-12-09T16:57:38.002Z] 5928.00 IOPS, 46.31 MiB/s [2024-12-09T16:57:38.936Z] 5940.80 IOPS, 46.41 MiB/s [2024-12-09T16:57:40.310Z] 5949.17 IOPS, 46.48 MiB/s [2024-12-09T16:57:41.246Z] 5955.57 IOPS, 46.53 MiB/s [2024-12-09T16:57:42.180Z] 5951.25 IOPS, 46.49 MiB/s [2024-12-09T16:57:43.114Z] 5956.11 IOPS, 46.53 MiB/s [2024-12-09T16:57:43.114Z] 5959.90 IOPS, 46.56 MiB/s 00:08:20.073 Latency(us) 00:08:20.073 [2024-12-09T16:57:43.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.073 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:20.073 Verification LBA range: start 0x0 length 0x1000 00:08:20.073 Nvme1n1 : 10.01 5959.94 46.56 0.00 0.00 21417.69 245.76 31651.46 00:08:20.073 [2024-12-09T16:57:43.114Z] =================================================================================================================== 00:08:20.073 [2024-12-09T16:57:43.114Z] Total : 5959.94 46.56 0.00 0.00 21417.69 245.76 31651.46 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1390379 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.331 17:57:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.331 { 00:08:20.331 "params": { 00:08:20.331 "name": "Nvme$subsystem", 00:08:20.331 "trtype": "$TEST_TRANSPORT", 00:08:20.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.331 "adrfam": "ipv4", 00:08:20.331 "trsvcid": "$NVMF_PORT", 00:08:20.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.331 "hdgst": ${hdgst:-false}, 00:08:20.331 "ddgst": ${ddgst:-false} 00:08:20.331 }, 00:08:20.331 "method": "bdev_nvme_attach_controller" 00:08:20.331 } 00:08:20.331 EOF 00:08:20.331 )") 00:08:20.331 [2024-12-09 17:57:43.122975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.331 [2024-12-09 17:57:43.123017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:20.331 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.331 "params": { 00:08:20.331 "name": "Nvme1", 00:08:20.331 "trtype": "tcp", 00:08:20.331 "traddr": "10.0.0.2", 00:08:20.331 "adrfam": "ipv4", 00:08:20.331 "trsvcid": "4420", 00:08:20.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:20.331 "hdgst": false, 00:08:20.331 "ddgst": false 00:08:20.331 }, 00:08:20.331 "method": "bdev_nvme_attach_controller" 00:08:20.331 }' 00:08:20.331 [2024-12-09 17:57:43.130923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.331 [2024-12-09 17:57:43.130946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.331 [2024-12-09 17:57:43.138941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.331 [2024-12-09 17:57:43.138961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.331 [2024-12-09 17:57:43.146943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.331 [2024-12-09 17:57:43.146970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.331 [2024-12-09 17:57:43.154963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.331 [2024-12-09 17:57:43.154982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.331 [2024-12-09 17:57:43.163000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.331 [2024-12-09 17:57:43.163020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.165996] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:08:20.332 [2024-12-09 17:57:43.166070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390379 ] 00:08:20.332 [2024-12-09 17:57:43.171006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.171026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.179025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.179043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.187047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.187066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.195070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.195089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.203090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.203109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.211116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.211136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.219135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.219155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:20.332 [2024-12-09 17:57:43.227155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.227175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.233947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.332 [2024-12-09 17:57:43.235175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.235194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.243232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.243271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.251248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.251281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.259240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.259259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.267260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.267280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.275282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.275302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.283304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.283324] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.291325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.291345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.294294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.332 [2024-12-09 17:57:43.299346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.299365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.307370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.307390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.315428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.315464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.323444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.323482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.331476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.331529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.339496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.339555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.347518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:08:20.332 [2024-12-09 17:57:43.347577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.355558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.355609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.332 [2024-12-09 17:57:43.363540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.332 [2024-12-09 17:57:43.363568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.590 [2024-12-09 17:57:43.371608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.371647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.379659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.379700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.387651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.387689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.395637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.395660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.403674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.403697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.411689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 
17:57:43.411709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.419733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.419758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.427749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.427787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.435774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.435797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.443796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.443818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.451829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.451851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.459853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.459874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.467875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.467909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.475895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.475930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.483915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.483948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.491947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.491969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.499966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.499989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.507984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.508004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.516006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.516025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.524029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.524049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.532047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.532066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.540069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.540089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 
[2024-12-09 17:57:43.548089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.548111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.556111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.556130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.564134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.564153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.572158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.572177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.580197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.580221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.588210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.588231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.596228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.596248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.604250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.604269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.612273] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.612292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.620296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.620315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.591 [2024-12-09 17:57:43.628335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.591 [2024-12-09 17:57:43.628359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.636343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.636366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.644369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.644393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.652389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.652424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 Running I/O for 5 seconds... 
00:08:20.850 [2024-12-09 17:57:43.663305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.663333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.673422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.673452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.683672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.683700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.694230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.694258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.704834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.704862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.715243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.715272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.725613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.725641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.736209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.736237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.748614] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.748641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.760346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.760373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.769127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.769154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.780737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.780764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.792755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.792782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.802350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.802378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.812776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.812803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.823177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.823204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.833605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.833633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.843908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.843936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.854501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.854528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.864726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.864754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.875009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.875036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.850 [2024-12-09 17:57:43.885668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.850 [2024-12-09 17:57:43.885695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.896250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.896280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.908844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.908871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.919398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 
[2024-12-09 17:57:43.919425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.929730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.929757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.940027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.940055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.950259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.950286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.960356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.960384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.970602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.970630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.980969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.980997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:43.991064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:43.991091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.001310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.001337] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.011712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.011740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.022347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.022374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.032745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.032772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.043314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.043341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.053472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.053499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.063840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.063867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.074363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.074390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.084951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.084979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.109 [2024-12-09 17:57:44.095287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.095315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.105314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.105341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.115359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.115386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.125571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.125598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.135877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.135905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.109 [2024-12-09 17:57:44.146243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.109 [2024-12-09 17:57:44.146272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.156703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.156732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.167303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.167331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.177718] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.177745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.188259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.188286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.198903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.198930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.211312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.211340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.221901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.221929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.232795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.232822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.245414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.245441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.255796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-09 17:57:44.255824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-09 17:57:44.266649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:08:21.368 [2024-12-09 17:57:44.266676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2130 / nvmf_rpc.c:1520 error pair repeats at roughly 10 ms intervals from 17:57:44.266 through 17:57:46.014; repeated entries elided ...]
00:08:21.886 12073.00 IOPS, 94.32 MiB/s [2024-12-09T16:57:44.927Z]
00:08:22.662 12067.50 IOPS, 94.28 MiB/s [2024-12-09T16:57:45.703Z]
00:08:23.182 [2024-12-09 17:57:46.027072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:23.182 [2024-12-09 17:57:46.027100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:08:23.182 [2024-12-09 17:57:46.037423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.037450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.047928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.047955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.060363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.060391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.070183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.070210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.080718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.080746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.090874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.090900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.101424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.101450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.111806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.111833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.122605] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.122632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.132736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.132763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.143356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.143383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.156129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.156156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.166156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.166183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.176307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.176334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.186824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.186862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.198992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.199020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.209243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.209271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.182 [2024-12-09 17:57:46.219846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.182 [2024-12-09 17:57:46.219874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.479 [2024-12-09 17:57:46.233064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.479 [2024-12-09 17:57:46.233101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.479 [2024-12-09 17:57:46.244642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.479 [2024-12-09 17:57:46.244670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.479 [2024-12-09 17:57:46.255468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.479 [2024-12-09 17:57:46.255496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.479 [2024-12-09 17:57:46.266308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.479 [2024-12-09 17:57:46.266335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.479 [2024-12-09 17:57:46.277232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.277261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.287614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.287642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.298818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 
[2024-12-09 17:57:46.298845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.311728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.311756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.321926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.321953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.332369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.332398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.343134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.343161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.356253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.356284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.366263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.366291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.376661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.376688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.387071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.387099] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.397391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.397418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.408602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.408630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.421082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.421110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.431206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.431234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.442300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.442327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.454542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.454579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.464826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.464854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.480 [2024-12-09 17:57:46.475693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.475721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:23.480 [2024-12-09 17:57:46.490271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.480 [2024-12-09 17:57:46.490307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-09 17:57:46.503385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-09 17:57:46.503415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-09 17:57:46.513779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-09 17:57:46.513807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-09 17:57:46.524687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-09 17:57:46.524721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-09 17:57:46.537223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-09 17:57:46.537252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-09 17:57:46.549865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.549894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.559255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.559282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.570936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.570963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.583709] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.583737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.593872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.593899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.604360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.604388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.617050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.617077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.627249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.627276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.637958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.637986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.648833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.648861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.659596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.659624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 12006.33 IOPS, 93.80 MiB/s [2024-12-09T16:57:46.802Z] [2024-12-09 17:57:46.672726] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.672753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.684500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.684528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.693721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.693748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.704851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.704878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.717645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.717673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.727787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.727815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.737910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.737937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.747916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.747943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.758420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.758447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.768920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.768946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.779422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.779450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.761 [2024-12-09 17:57:46.792478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.761 [2024-12-09 17:57:46.792505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.803151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.803179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.813900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.813938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.826378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.826406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.836532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.836567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.846861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 
[2024-12-09 17:57:46.846888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.856886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.856913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.867416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.867443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.878031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.878058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.888790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.888818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.901683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.901710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.913352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.913379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.019 [2024-12-09 17:57:46.922520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.019 [2024-12-09 17:57:46.922557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:46.933911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:46.933939] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:46.946728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:46.946755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:46.958578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:46.958605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:46.967691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:46.967719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:46.978784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:46.978811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:46.991479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:46.991507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:47.003273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:47.003301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:47.013652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:47.013681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:47.024470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:47.024508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:24.020 [2024-12-09 17:57:47.035128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:47.035156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:47.045718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:47.045745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.020 [2024-12-09 17:57:47.056537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.020 [2024-12-09 17:57:47.056574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.067160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.067189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.077596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.077623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.088135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.088163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.098619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.098647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.109210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.109237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.122733] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.122761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.132948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.132975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.143732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.143760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.156136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.156164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.165786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.165813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.176443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.176470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.186955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.186983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.197581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.197609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.207812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.207840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.218369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.218396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.229022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.229062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.239697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.239724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.252393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.252420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.262600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.262628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.272934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.272961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.283402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.283429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.293709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 
[2024-12-09 17:57:47.293736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.304227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.304254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.279 [2024-12-09 17:57:47.314997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.279 [2024-12-09 17:57:47.315025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.538 [2024-12-09 17:57:47.325670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.538 [2024-12-09 17:57:47.325698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.538 [2024-12-09 17:57:47.336462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.538 [2024-12-09 17:57:47.336491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.538 [2024-12-09 17:57:47.347470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.538 [2024-12-09 17:57:47.347499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.538 [2024-12-09 17:57:47.360365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.538 [2024-12-09 17:57:47.360393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.538 [2024-12-09 17:57:47.370509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.538 [2024-12-09 17:57:47.370537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.538 [2024-12-09 17:57:47.380805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.538 [2024-12-09 17:57:47.380832] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.538 [2024-12-09 17:57:47.391493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:24.538 [2024-12-09 17:57:47.391521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:24.797 12019.75 IOPS, 93.90 MiB/s [2024-12-09T16:57:47.838Z]
00:08:25.832 [2024-12-09 17:57:48.629232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use
00:08:25.832 [2024-12-09 17:57:48.629261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.832 12022.40 IOPS, 93.92 MiB/s [2024-12-09T16:57:48.873Z]
00:08:25.832 Latency(us)
00:08:25.832 [2024-12-09T16:57:48.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.832 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:25.832 Nvme1n1 : 5.01 12027.52 93.97 0.00 0.00 10628.84 4708.88 18350.08
00:08:25.832 [2024-12-09T16:57:48.873Z] ===================================================================================================================
00:08:25.832 [2024-12-09T16:57:48.873Z] Total : 12027.52 93.97 0.00 0.00 10628.84 4708.88 18350.08
00:08:26.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1390379) - No such process
00:08:26.091 17:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1390379
00:08:26.091 17:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:26.091 17:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:26.091 delay0
00:08:26.091 17:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:26.091 17:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-09 17:57:49.020409] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
[2024-12-09 17:57:55.134972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
Initialization complete. Launching workers.
00:08:32.648 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 172 00:08:32.648 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 459, failed to submit 33 00:08:32.648 success 293, unsuccessful 166, failed 0 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.648 rmmod nvme_tcp 00:08:32.648 rmmod nvme_fabrics 00:08:32.648 rmmod nvme_keyring 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1389034 ']' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1389034 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1389034 ']' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1389034 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1389034 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1389034' 00:08:32.648 killing process with pid 1389034 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1389034 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1389034 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.648 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.557 00:08:34.557 real 0m27.993s 00:08:34.557 user 0m41.302s 00:08:34.557 sys 0m8.222s 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 ************************************ 00:08:34.557 END TEST nvmf_zcopy 00:08:34.557 ************************************ 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.557 17:57:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 ************************************ 00:08:34.557 START TEST nvmf_nmic 00:08:34.557 ************************************ 00:08:34.558 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:34.817 * Looking for test storage... 
00:08:34.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.817 17:57:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.817 --rc genhtml_branch_coverage=1 00:08:34.817 --rc genhtml_function_coverage=1 00:08:34.817 --rc genhtml_legend=1 00:08:34.817 --rc geninfo_all_blocks=1 00:08:34.817 --rc geninfo_unexecuted_blocks=1 
00:08:34.817 00:08:34.817 ' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.817 --rc genhtml_branch_coverage=1 00:08:34.817 --rc genhtml_function_coverage=1 00:08:34.817 --rc genhtml_legend=1 00:08:34.817 --rc geninfo_all_blocks=1 00:08:34.817 --rc geninfo_unexecuted_blocks=1 00:08:34.817 00:08:34.817 ' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.817 --rc genhtml_branch_coverage=1 00:08:34.817 --rc genhtml_function_coverage=1 00:08:34.817 --rc genhtml_legend=1 00:08:34.817 --rc geninfo_all_blocks=1 00:08:34.817 --rc geninfo_unexecuted_blocks=1 00:08:34.817 00:08:34.817 ' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.817 --rc genhtml_branch_coverage=1 00:08:34.817 --rc genhtml_function_coverage=1 00:08:34.817 --rc genhtml_legend=1 00:08:34.817 --rc geninfo_all_blocks=1 00:08:34.817 --rc geninfo_unexecuted_blocks=1 00:08:34.817 00:08:34.817 ' 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.817 17:57:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.817 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.818 
17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.818 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.352 17:57:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:37.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:37.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.352 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:37.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:37.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:37.353 
17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.353 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:37.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:08:37.353 00:08:37.353 --- 10.0.0.2 ping statistics --- 00:08:37.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.353 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:08:37.353 00:08:37.353 --- 10.0.0.1 ping statistics --- 00:08:37.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.353 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1393780 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1393780 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1393780 ']' 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.353 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.353 [2024-12-09 17:58:00.162657] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:08:37.353 [2024-12-09 17:58:00.162746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.353 [2024-12-09 17:58:00.241732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.353 [2024-12-09 17:58:00.304896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.353 [2024-12-09 17:58:00.304949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:37.353 [2024-12-09 17:58:00.304978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.353 [2024-12-09 17:58:00.304989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.353 [2024-12-09 17:58:00.305000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.353 [2024-12-09 17:58:00.306619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.353 [2024-12-09 17:58:00.306673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.353 [2024-12-09 17:58:00.306701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.353 [2024-12-09 17:58:00.306705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 [2024-12-09 17:58:00.446829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.612 
17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 Malloc0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 [2024-12-09 17:58:00.517176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:37.612 test case1: single bdev can't be used in multiple subsystems 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 [2024-12-09 17:58:00.540993] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:37.612 [2024-12-09 
17:58:00.541022] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:37.612 [2024-12-09 17:58:00.541052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.612 request: 00:08:37.612 { 00:08:37.612 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:37.612 "namespace": { 00:08:37.612 "bdev_name": "Malloc0", 00:08:37.612 "no_auto_visible": false, 00:08:37.612 "hide_metadata": false 00:08:37.612 }, 00:08:37.612 "method": "nvmf_subsystem_add_ns", 00:08:37.612 "req_id": 1 00:08:37.612 } 00:08:37.612 Got JSON-RPC error response 00:08:37.612 response: 00:08:37.612 { 00:08:37.612 "code": -32602, 00:08:37.612 "message": "Invalid parameters" 00:08:37.612 } 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:37.612 Adding namespace failed - expected result. 
00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:37.612 test case2: host connect to nvmf target in multiple paths 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:37.612 [2024-12-09 17:58:00.549090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.612 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.177 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:39.109 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.109 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:39.109 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.109 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:39.110 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:41.007 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:41.007 [global] 00:08:41.007 thread=1 00:08:41.007 invalidate=1 00:08:41.007 rw=write 00:08:41.007 time_based=1 00:08:41.007 runtime=1 00:08:41.007 ioengine=libaio 00:08:41.007 direct=1 00:08:41.007 bs=4096 00:08:41.007 iodepth=1 00:08:41.007 norandommap=0 00:08:41.007 numjobs=1 00:08:41.007 00:08:41.007 verify_dump=1 00:08:41.007 verify_backlog=512 00:08:41.007 verify_state_save=0 00:08:41.007 do_verify=1 00:08:41.007 verify=crc32c-intel 00:08:41.007 [job0] 00:08:41.007 filename=/dev/nvme0n1 00:08:41.007 Could not set queue depth (nvme0n1) 00:08:41.265 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:41.265 fio-3.35 00:08:41.265 Starting 1 thread 00:08:42.197 00:08:42.197 job0: (groupid=0, jobs=1): err= 0: pid=1394307: Mon Dec 9 17:58:05 2024 00:08:42.197 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:42.197 slat (nsec): min=6560, max=46871, avg=13537.05, stdev=5021.75 00:08:42.197 clat (usec): min=179, max=1681, avg=231.50, stdev=38.54 00:08:42.197 lat (usec): min=186, max=1688, 
avg=245.04, stdev=39.91 00:08:42.197 clat percentiles (usec): 00:08:42.197 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 210], 00:08:42.197 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:08:42.197 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:08:42.197 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 343], 00:08:42.197 | 99.99th=[ 1680] 00:08:42.197 write: IOPS=2402, BW=9610KiB/s (9841kB/s)(9620KiB/1001msec); 0 zone resets 00:08:42.197 slat (usec): min=8, max=31372, avg=30.18, stdev=639.41 00:08:42.197 clat (usec): min=128, max=718, avg=168.67, stdev=26.76 00:08:42.197 lat (usec): min=137, max=31571, avg=198.85, stdev=640.67 00:08:42.197 clat percentiles (usec): 00:08:42.197 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:08:42.197 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:08:42.197 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 200], 00:08:42.197 | 99.00th=[ 251], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 709], 00:08:42.197 | 99.99th=[ 717] 00:08:42.197 bw ( KiB/s): min= 9288, max= 9288, per=96.65%, avg=9288.00, stdev= 0.00, samples=1 00:08:42.197 iops : min= 2322, max= 2322, avg=2322.00, stdev= 0.00, samples=1 00:08:42.197 lat (usec) : 250=91.51%, 500=8.42%, 750=0.04% 00:08:42.197 lat (msec) : 2=0.02% 00:08:42.197 cpu : usr=5.00%, sys=9.50%, ctx=4456, majf=0, minf=1 00:08:42.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.197 issued rwts: total=2048,2405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.197 00:08:42.197 Run status group 0 (all jobs): 00:08:42.197 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), 
run=1001-1001msec 00:08:42.197 WRITE: bw=9610KiB/s (9841kB/s), 9610KiB/s-9610KiB/s (9841kB/s-9841kB/s), io=9620KiB (9851kB), run=1001-1001msec 00:08:42.197 00:08:42.197 Disk stats (read/write): 00:08:42.197 nvme0n1: ios=1967/2048, merge=0/0, ticks=1391/320, in_queue=1711, util=98.90% 00:08:42.197 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 
-- # for i in {1..20} 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.454 rmmod nvme_tcp 00:08:42.454 rmmod nvme_fabrics 00:08:42.454 rmmod nvme_keyring 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:42.454 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1393780 ']' 00:08:42.455 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1393780 00:08:42.455 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1393780 ']' 00:08:42.455 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1393780 00:08:42.455 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:42.455 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.455 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1393780 00:08:42.713 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.713 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.713 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1393780' 00:08:42.713 killing process with pid 1393780 00:08:42.713 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1393780 00:08:42.713 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1393780 00:08:42.971 17:58:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.971 17:58:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.879 00:08:44.879 real 0m10.226s 00:08:44.879 user 0m22.836s 00:08:44.879 sys 0m2.628s 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.879 ************************************ 00:08:44.879 END TEST nvmf_nmic 00:08:44.879 ************************************ 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.879 ************************************ 00:08:44.879 START TEST nvmf_fio_target 00:08:44.879 ************************************ 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:44.879 * Looking for test storage... 00:08:44.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.879 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.139 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.139 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.139 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.139 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.139 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:45.140 17:58:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:45.140 --rc genhtml_branch_coverage=1
00:08:45.140 --rc genhtml_function_coverage=1
00:08:45.140 --rc genhtml_legend=1
00:08:45.140 --rc geninfo_all_blocks=1
00:08:45.140 --rc geninfo_unexecuted_blocks=1
00:08:45.140 
00:08:45.140 '
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:45.140 --rc genhtml_branch_coverage=1
00:08:45.140 --rc genhtml_function_coverage=1
00:08:45.140 --rc genhtml_legend=1
00:08:45.140 --rc geninfo_all_blocks=1
00:08:45.140 --rc geninfo_unexecuted_blocks=1
00:08:45.140 
00:08:45.140 '
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:45.140 --rc genhtml_branch_coverage=1
00:08:45.140 --rc genhtml_function_coverage=1
00:08:45.140 --rc genhtml_legend=1
00:08:45.140 --rc geninfo_all_blocks=1
00:08:45.140 --rc geninfo_unexecuted_blocks=1
00:08:45.140 
00:08:45.140 '
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:45.140 --rc genhtml_branch_coverage=1
00:08:45.140 --rc genhtml_function_coverage=1
00:08:45.140 --rc genhtml_legend=1
00:08:45.140 --rc geninfo_all_blocks=1
00:08:45.140 --rc geninfo_unexecuted_blocks=1
00:08:45.140 
00:08:45.140 '
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:45.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:45.140 17:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:45.140 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:45.141 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:45.141 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:45.141 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:08:45.141 17:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:47.672 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:47.673 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:47.673 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:47.673 Found net devices under 0000:0a:00.0: cvl_0_0
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:47.673 Found net devices under 0000:0a:00.1: cvl_0_1
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:47.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:47.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms
00:08:47.673 
00:08:47.673 --- 10.0.0.2 ping statistics ---
00:08:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.673 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:47.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:47.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:08:47.673 
00:08:47.673 --- 10.0.0.1 ping statistics ---
00:08:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.673 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1396507
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1396507
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1396507 ']'
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:47.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:47.673 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:47.673 [2024-12-09 17:58:10.510161] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:08:47.673 [2024-12-09 17:58:10.510234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:47.673 [2024-12-09 17:58:10.583304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:47.673 [2024-12-09 17:58:10.639179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:47.673 [2024-12-09 17:58:10.639238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:47.673 [2024-12-09 17:58:10.639261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:47.673 [2024-12-09 17:58:10.639271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:47.673 [2024-12-09 17:58:10.639280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:47.674 [2024-12-09 17:58:10.640817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:47.674 [2024-12-09 17:58:10.640944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:47.674 [2024-12-09 17:58:10.641002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:47.674 [2024-12-09 17:58:10.641005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:47.932 17:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:48.189 [2024-12-09 17:58:11.035425] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:48.189 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:48.447 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:08:48.447 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:48.706 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:08:48.706 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:48.964 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:08:48.964 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:49.222 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:08:49.222 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:08:49.787 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:49.788 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:08:49.788 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:50.354 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:08:50.354 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:50.354 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:08:50.354 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:08:50.919 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:50.919 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:50.919 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:51.484 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:51.484 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:08:51.484 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:51.741 [2024-12-09 17:58:14.734078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:51.741 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:08:51.998 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:08:52.255 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:08:53.188 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:08:53.188 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:08:53.188 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:53.188 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:08:53.188 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:08:53.188 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:08:55.091 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:55.091 [global]
00:08:55.091 thread=1
00:08:55.091 invalidate=1
00:08:55.091 rw=write
00:08:55.091 time_based=1
00:08:55.091 runtime=1
00:08:55.091 ioengine=libaio
00:08:55.091 direct=1
00:08:55.091 bs=4096
00:08:55.091 iodepth=1
00:08:55.091 norandommap=0
00:08:55.091 numjobs=1
00:08:55.091 
00:08:55.091 verify_dump=1
00:08:55.091 verify_backlog=512
00:08:55.091 verify_state_save=0
00:08:55.091 do_verify=1
00:08:55.091 verify=crc32c-intel
00:08:55.091 [job0]
00:08:55.091 filename=/dev/nvme0n1
00:08:55.091 [job1]
00:08:55.091 filename=/dev/nvme0n2
00:08:55.091 [job2]
00:08:55.091 filename=/dev/nvme0n3
00:08:55.091 [job3]
00:08:55.091 filename=/dev/nvme0n4
00:08:55.091 Could not set queue depth (nvme0n1)
00:08:55.091 Could not set queue depth (nvme0n2)
00:08:55.091 Could not set queue depth (nvme0n3)
00:08:55.091 Could not set queue depth (nvme0n4)
00:08:55.348 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:55.348 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:55.348 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:55.348 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:55.348 fio-3.35
00:08:55.348 
00:08:55.348 Starting 4 threads
00:08:56.721 
00:08:56.721 job0: (groupid=0, jobs=1): err= 0: pid=1397588: Mon Dec 9 17:58:19 2024
00:08:56.721 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:08:56.721 slat (nsec): min=5342, max=45749, avg=11732.51, stdev=5269.51
00:08:56.721 clat (usec): min=175, max=779, avg=238.52, stdev=53.59
00:08:56.721 lat (usec): min=182, max=790, avg=250.25, stdev=55.97
00:08:56.721 clat percentiles (usec):
00:08:56.721  | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196],
00:08:56.721  | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231],
00:08:56.721  | 70.00th=[ 255], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318],
00:08:56.721  | 99.00th=[ 457], 99.50th=[ 502], 99.90th=[ 676], 99.95th=[ 775],
00:08:56.721  | 99.99th=[ 783]
00:08:56.721 write: IOPS=2286, BW=9147KiB/s (9366kB/s)(9156KiB/1001msec); 0 zone resets
00:08:56.721 slat (nsec): min=7395, max=63225, avg=15971.94, stdev=6970.91
00:08:56.721 clat (usec): min=131, max=744, avg=188.76, stdev=44.26
00:08:56.721 lat (usec): min=140, max=762, avg=204.73, stdev=46.09
00:08:56.721 clat percentiles (usec):
00:08:56.721  | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149],
00:08:56.721  | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188],
00:08:56.721  | 70.00th=[ 196], 80.00th=[ 217], 90.00th=[ 258], 95.00th=[ 277],
00:08:56.721  | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 461],
00:08:56.721  | 99.99th=[ 742]
00:08:56.721 bw ( KiB/s): min= 8192, max= 8192, per=48.59%, avg=8192.00, stdev= 0.00, samples=1
00:08:56.721 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:08:56.721 lat (usec) : 250=78.79%, 500=20.94%, 750=0.23%, 1000=0.05%
00:08:56.721 cpu : usr=4.60%, sys=8.30%, ctx=4339, majf=0, minf=1
00:08:56.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:56.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:56.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:56.721 issued rwts: total=2048,2289,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:56.721 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:56.721 job1: (groupid=0, jobs=1): err= 0: pid=1397589: Mon Dec 9 17:58:19 2024
00:08:56.721 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec)
00:08:56.721 slat (nsec): min=14246, max=50956, avg=22245.00, stdev=10106.68
00:08:56.721 clat (usec): min=40979, max=42043, avg=41869.57, stdev=289.40
00:08:56.721 lat (usec): min=40995, max=42058, avg=41891.81, stdev=290.59
00:08:56.721 clat percentiles (usec):
00:08:56.721  | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681],
00:08:56.721  | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:08:56.721  | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:08:56.721  | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:08:56.721  |
99.99th=[42206] 00:08:56.721 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:08:56.721 slat (nsec): min=7168, max=42677, avg=13690.15, stdev=6156.07 00:08:56.721 clat (usec): min=160, max=356, avg=190.51, stdev=18.21 00:08:56.721 lat (usec): min=168, max=372, avg=204.20, stdev=21.00 00:08:56.721 clat percentiles (usec): 00:08:56.721 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:08:56.721 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:08:56.721 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 223], 00:08:56.721 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 355], 99.95th=[ 355], 00:08:56.721 | 99.99th=[ 355] 00:08:56.721 bw ( KiB/s): min= 4096, max= 4096, per=24.30%, avg=4096.00, stdev= 0.00, samples=1 00:08:56.721 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:56.721 lat (usec) : 250=95.51%, 500=0.37% 00:08:56.721 lat (msec) : 50=4.12% 00:08:56.721 cpu : usr=0.49%, sys=0.49%, ctx=535, majf=0, minf=1 00:08:56.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.721 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.721 job2: (groupid=0, jobs=1): err= 0: pid=1397590: Mon Dec 9 17:58:19 2024 00:08:56.721 read: IOPS=513, BW=2052KiB/s (2102kB/s)(2112KiB/1029msec) 00:08:56.721 slat (nsec): min=7416, max=49465, avg=16558.45, stdev=5801.05 00:08:56.721 clat (usec): min=235, max=41328, avg=1453.68, stdev=6761.71 00:08:56.721 lat (usec): min=245, max=41347, avg=1470.24, stdev=6762.92 00:08:56.721 clat percentiles (usec): 00:08:56.721 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 269], 00:08:56.721 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 
00:08:56.721 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 351], 95.00th=[ 363], 00:08:56.721 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:56.721 | 99.99th=[41157] 00:08:56.721 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:08:56.721 slat (nsec): min=8525, max=68493, avg=19916.98, stdev=8442.10 00:08:56.721 clat (usec): min=146, max=459, avg=219.16, stdev=42.24 00:08:56.721 lat (usec): min=162, max=500, avg=239.08, stdev=42.76 00:08:56.721 clat percentiles (usec): 00:08:56.721 | 1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:08:56.721 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 221], 00:08:56.721 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 293], 00:08:56.721 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 424], 99.95th=[ 461], 00:08:56.721 | 99.99th=[ 461] 00:08:56.721 bw ( KiB/s): min= 8192, max= 8192, per=48.59%, avg=8192.00, stdev= 0.00, samples=1 00:08:56.721 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:56.721 lat (usec) : 250=52.06%, 500=46.97% 00:08:56.721 lat (msec) : 50=0.97% 00:08:56.721 cpu : usr=2.82%, sys=2.72%, ctx=1553, majf=0, minf=1 00:08:56.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.721 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.721 job3: (groupid=0, jobs=1): err= 0: pid=1397591: Mon Dec 9 17:58:19 2024 00:08:56.721 read: IOPS=58, BW=234KiB/s (240kB/s)(236KiB/1008msec) 00:08:56.721 slat (nsec): min=6458, max=49267, avg=14347.47, stdev=8972.58 00:08:56.721 clat (usec): min=258, max=42230, avg=15202.59, stdev=19967.05 00:08:56.721 lat (usec): min=274, max=42244, avg=15216.93, stdev=19972.82 00:08:56.721 clat 
percentiles (usec): 00:08:56.721 | 1.00th=[ 260], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 486], 00:08:56.721 | 30.00th=[ 494], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[ 529], 00:08:56.721 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:56.721 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:56.721 | 99.99th=[42206] 00:08:56.721 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:08:56.721 slat (nsec): min=6872, max=46277, avg=13352.71, stdev=6363.86 00:08:56.721 clat (usec): min=164, max=467, avg=198.01, stdev=22.42 00:08:56.721 lat (usec): min=174, max=482, avg=211.37, stdev=25.14 00:08:56.721 clat percentiles (usec): 00:08:56.721 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:08:56.722 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:08:56.722 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:08:56.722 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 469], 99.95th=[ 469], 00:08:56.722 | 99.99th=[ 469] 00:08:56.722 bw ( KiB/s): min= 4096, max= 4096, per=24.30%, avg=4096.00, stdev= 0.00, samples=1 00:08:56.722 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:56.722 lat (usec) : 250=88.27%, 500=5.08%, 750=2.98% 00:08:56.722 lat (msec) : 50=3.68% 00:08:56.722 cpu : usr=0.30%, sys=0.79%, ctx=572, majf=0, minf=2 00:08:56.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.722 issued rwts: total=59,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.722 00:08:56.722 Run status group 0 (all jobs): 00:08:56.722 READ: bw=10.1MiB/s (10.6MB/s), 85.6KiB/s-8184KiB/s (87.7kB/s-8380kB/s), io=10.4MiB (10.9MB), run=1001-1029msec 00:08:56.722 WRITE: 
bw=16.5MiB/s (17.3MB/s), 1992KiB/s-9147KiB/s (2040kB/s-9366kB/s), io=16.9MiB (17.8MB), run=1001-1029msec 00:08:56.722 00:08:56.722 Disk stats (read/write): 00:08:56.722 nvme0n1: ios=1665/2048, merge=0/0, ticks=399/370, in_queue=769, util=87.27% 00:08:56.722 nvme0n2: ios=66/512, merge=0/0, ticks=1155/98, in_queue=1253, util=89.94% 00:08:56.722 nvme0n3: ios=551/1024, merge=0/0, ticks=1461/216, in_queue=1677, util=93.75% 00:08:56.722 nvme0n4: ios=112/512, merge=0/0, ticks=824/105, in_queue=929, util=95.80% 00:08:56.722 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:56.722 [global] 00:08:56.722 thread=1 00:08:56.722 invalidate=1 00:08:56.722 rw=randwrite 00:08:56.722 time_based=1 00:08:56.722 runtime=1 00:08:56.722 ioengine=libaio 00:08:56.722 direct=1 00:08:56.722 bs=4096 00:08:56.722 iodepth=1 00:08:56.722 norandommap=0 00:08:56.722 numjobs=1 00:08:56.722 00:08:56.722 verify_dump=1 00:08:56.722 verify_backlog=512 00:08:56.722 verify_state_save=0 00:08:56.722 do_verify=1 00:08:56.722 verify=crc32c-intel 00:08:56.722 [job0] 00:08:56.722 filename=/dev/nvme0n1 00:08:56.722 [job1] 00:08:56.722 filename=/dev/nvme0n2 00:08:56.722 [job2] 00:08:56.722 filename=/dev/nvme0n3 00:08:56.722 [job3] 00:08:56.722 filename=/dev/nvme0n4 00:08:56.722 Could not set queue depth (nvme0n1) 00:08:56.722 Could not set queue depth (nvme0n2) 00:08:56.722 Could not set queue depth (nvme0n3) 00:08:56.722 Could not set queue depth (nvme0n4) 00:08:56.722 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.722 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.722 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.722 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.722 fio-3.35 00:08:56.722 Starting 4 threads 00:08:58.163 00:08:58.163 job0: (groupid=0, jobs=1): err= 0: pid=1397817: Mon Dec 9 17:58:20 2024 00:08:58.163 read: IOPS=360, BW=1443KiB/s (1477kB/s)(1444KiB/1001msec) 00:08:58.163 slat (nsec): min=5336, max=62128, avg=19635.40, stdev=11740.35 00:08:58.163 clat (usec): min=174, max=42342, avg=2363.14, stdev=8875.55 00:08:58.163 lat (usec): min=190, max=42377, avg=2382.78, stdev=8876.99 00:08:58.163 clat percentiles (usec): 00:08:58.163 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 231], 00:08:58.163 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 302], 60.00th=[ 359], 00:08:58.163 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 506], 95.00th=[ 635], 00:08:58.163 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:58.163 | 99.99th=[42206] 00:08:58.163 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:08:58.163 slat (nsec): min=6704, max=53607, avg=16123.04, stdev=7922.67 00:08:58.163 clat (usec): min=134, max=411, avg=248.52, stdev=45.93 00:08:58.163 lat (usec): min=145, max=421, avg=264.64, stdev=44.19 00:08:58.163 clat percentiles (usec): 00:08:58.163 | 1.00th=[ 159], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 208], 00:08:58.163 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 260], 00:08:58.163 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 326], 00:08:58.163 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 412], 00:08:58.163 | 99.99th=[ 412] 00:08:58.163 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:08:58.163 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:58.163 lat (usec) : 250=42.73%, 500=52.58%, 750=2.63% 00:08:58.163 lat (msec) : 50=2.06% 00:08:58.163 cpu : usr=1.20%, sys=1.50%, ctx=874, majf=0, minf=1 00:08:58.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:08:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 issued rwts: total=361,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.164 job1: (groupid=0, jobs=1): err= 0: pid=1397818: Mon Dec 9 17:58:20 2024 00:08:58.164 read: IOPS=298, BW=1192KiB/s (1221kB/s)(1228KiB/1030msec) 00:08:58.164 slat (nsec): min=4893, max=48895, avg=16685.55, stdev=9381.50 00:08:58.164 clat (usec): min=213, max=42138, avg=2999.21, stdev=10243.69 00:08:58.164 lat (usec): min=221, max=42143, avg=3015.90, stdev=10245.57 00:08:58.164 clat percentiles (usec): 00:08:58.164 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:08:58.164 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 00:08:58.164 | 70.00th=[ 293], 80.00th=[ 449], 90.00th=[ 506], 95.00th=[41157], 00:08:58.164 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:58.164 | 99.99th=[42206] 00:08:58.164 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:08:58.164 slat (nsec): min=6496, max=36884, avg=10769.93, stdev=5578.45 00:08:58.164 clat (usec): min=158, max=339, avg=186.42, stdev=17.84 00:08:58.164 lat (usec): min=167, max=349, avg=197.19, stdev=19.63 00:08:58.164 clat percentiles (usec): 00:08:58.164 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:08:58.164 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:08:58.164 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:08:58.164 | 99.00th=[ 235], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 338], 00:08:58.164 | 99.99th=[ 338] 00:08:58.164 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:08:58.164 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:58.164 lat (usec) : 250=78.75%, 500=17.46%, 
750=0.98%, 1000=0.24% 00:08:58.164 lat (msec) : 2=0.12%, 50=2.44% 00:08:58.164 cpu : usr=0.39%, sys=1.17%, ctx=820, majf=0, minf=1 00:08:58.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 issued rwts: total=307,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.164 job2: (groupid=0, jobs=1): err= 0: pid=1397819: Mon Dec 9 17:58:20 2024 00:08:58.164 read: IOPS=430, BW=1720KiB/s (1762kB/s)(1772KiB/1030msec) 00:08:58.164 slat (nsec): min=6365, max=50831, avg=18005.91, stdev=7592.29 00:08:58.164 clat (usec): min=183, max=42126, avg=1984.66, stdev=8164.45 00:08:58.164 lat (usec): min=191, max=42144, avg=2002.66, stdev=8164.58 00:08:58.164 clat percentiles (usec): 00:08:58.164 | 1.00th=[ 200], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:08:58.164 | 30.00th=[ 245], 40.00th=[ 265], 50.00th=[ 293], 60.00th=[ 310], 00:08:58.164 | 70.00th=[ 326], 80.00th=[ 396], 90.00th=[ 478], 95.00th=[ 619], 00:08:58.164 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:58.164 | 99.99th=[42206] 00:08:58.164 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:08:58.164 slat (nsec): min=7978, max=56469, avg=18328.91, stdev=9882.49 00:08:58.164 clat (usec): min=157, max=512, avg=248.13, stdev=43.15 00:08:58.164 lat (usec): min=170, max=544, avg=266.46, stdev=41.35 00:08:58.164 clat percentiles (usec): 00:08:58.164 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 210], 00:08:58.164 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 258], 00:08:58.164 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 326], 00:08:58.164 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 515], 99.95th=[ 515], 00:08:58.164 | 99.99th=[ 515] 00:08:58.164 bw ( 
KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:08:58.164 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:58.164 lat (usec) : 250=45.13%, 500=50.89%, 750=2.09% 00:08:58.164 lat (msec) : 50=1.88% 00:08:58.164 cpu : usr=1.46%, sys=1.94%, ctx=956, majf=0, minf=1 00:08:58.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 issued rwts: total=443,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.164 job3: (groupid=0, jobs=1): err= 0: pid=1397820: Mon Dec 9 17:58:20 2024 00:08:58.164 read: IOPS=1454, BW=5816KiB/s (5956kB/s)(5828KiB/1002msec) 00:08:58.164 slat (nsec): min=5104, max=63633, avg=16205.17, stdev=8695.84 00:08:58.164 clat (usec): min=172, max=42149, avg=469.25, stdev=2648.03 00:08:58.164 lat (usec): min=183, max=42154, avg=485.46, stdev=2648.57 00:08:58.164 clat percentiles (usec): 00:08:58.164 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:08:58.164 | 30.00th=[ 212], 40.00th=[ 229], 50.00th=[ 285], 60.00th=[ 310], 00:08:58.164 | 70.00th=[ 363], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 457], 00:08:58.164 | 99.00th=[ 523], 99.50th=[ 627], 99.90th=[42206], 99.95th=[42206], 00:08:58.164 | 99.99th=[42206] 00:08:58.164 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:08:58.164 slat (nsec): min=6451, max=49394, avg=11949.48, stdev=5855.80 00:08:58.164 clat (usec): min=132, max=581, avg=171.12, stdev=28.85 00:08:58.164 lat (usec): min=141, max=601, avg=183.07, stdev=31.11 00:08:58.164 clat percentiles (usec): 00:08:58.164 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:08:58.164 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 176], 00:08:58.164 | 70.00th=[ 
186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 212], 00:08:58.164 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 445], 99.95th=[ 586], 00:08:58.164 | 99.99th=[ 586] 00:08:58.164 bw ( KiB/s): min= 4096, max= 8192, per=51.50%, avg=6144.00, stdev=2896.31, samples=2 00:08:58.164 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:58.164 lat (usec) : 250=71.67%, 500=27.36%, 750=0.74% 00:08:58.164 lat (msec) : 4=0.03%, 50=0.20% 00:08:58.164 cpu : usr=2.30%, sys=4.60%, ctx=2993, majf=0, minf=2 00:08:58.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.164 issued rwts: total=1457,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.164 00:08:58.164 Run status group 0 (all jobs): 00:08:58.164 READ: bw=9973KiB/s (10.2MB/s), 1192KiB/s-5816KiB/s (1221kB/s-5956kB/s), io=10.0MiB (10.5MB), run=1001-1030msec 00:08:58.164 WRITE: bw=11.7MiB/s (12.2MB/s), 1988KiB/s-6132KiB/s (2036kB/s-6279kB/s), io=12.0MiB (12.6MB), run=1001-1030msec 00:08:58.164 00:08:58.164 Disk stats (read/write): 00:08:58.164 nvme0n1: ios=249/512, merge=0/0, ticks=996/124, in_queue=1120, util=98.20% 00:08:58.164 nvme0n2: ios=352/512, merge=0/0, ticks=1076/88, in_queue=1164, util=98.27% 00:08:58.164 nvme0n3: ios=468/512, merge=0/0, ticks=904/125, in_queue=1029, util=96.46% 00:08:58.164 nvme0n4: ios=1278/1536, merge=0/0, ticks=526/256, in_queue=782, util=89.62% 00:08:58.164 17:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:58.164 [global] 00:08:58.164 thread=1 00:08:58.164 invalidate=1 00:08:58.164 rw=write 00:08:58.164 time_based=1 00:08:58.164 runtime=1 00:08:58.164 ioengine=libaio 
00:08:58.164 direct=1 00:08:58.164 bs=4096 00:08:58.164 iodepth=128 00:08:58.164 norandommap=0 00:08:58.164 numjobs=1 00:08:58.164 00:08:58.164 verify_dump=1 00:08:58.164 verify_backlog=512 00:08:58.164 verify_state_save=0 00:08:58.164 do_verify=1 00:08:58.164 verify=crc32c-intel 00:08:58.164 [job0] 00:08:58.164 filename=/dev/nvme0n1 00:08:58.164 [job1] 00:08:58.164 filename=/dev/nvme0n2 00:08:58.164 [job2] 00:08:58.164 filename=/dev/nvme0n3 00:08:58.164 [job3] 00:08:58.164 filename=/dev/nvme0n4 00:08:58.164 Could not set queue depth (nvme0n1) 00:08:58.164 Could not set queue depth (nvme0n2) 00:08:58.164 Could not set queue depth (nvme0n3) 00:08:58.164 Could not set queue depth (nvme0n4) 00:08:58.164 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.164 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.164 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.164 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.164 fio-3.35 00:08:58.164 Starting 4 threads 00:08:59.540 00:08:59.540 job0: (groupid=0, jobs=1): err= 0: pid=1398055: Mon Dec 9 17:58:22 2024 00:08:59.540 read: IOPS=4004, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1004msec) 00:08:59.540 slat (usec): min=2, max=48014, avg=141.21, stdev=1137.13 00:08:59.540 clat (usec): min=2174, max=76301, avg=18070.55, stdev=10875.12 00:08:59.540 lat (usec): min=6067, max=76324, avg=18211.75, stdev=10924.40 00:08:59.540 clat percentiles (usec): 00:08:59.540 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11731], 00:08:59.540 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[14353], 00:08:59.540 | 70.00th=[19530], 80.00th=[27132], 90.00th=[27395], 95.00th=[31327], 00:08:59.540 | 99.00th=[73925], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:08:59.540 
| 99.99th=[76022] 00:08:59.540 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:08:59.540 slat (usec): min=2, max=7000, avg=99.42, stdev=556.73 00:08:59.540 clat (usec): min=5128, max=56714, avg=13226.19, stdev=4618.35 00:08:59.540 lat (usec): min=5194, max=56724, avg=13325.61, stdev=4622.16 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10683], 00:08:59.541 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:08:59.541 | 70.00th=[13304], 80.00th=[15926], 90.00th=[17433], 95.00th=[20055], 00:08:59.541 | 99.00th=[26870], 99.50th=[30016], 99.90th=[56886], 99.95th=[56886], 00:08:59.541 | 99.99th=[56886] 00:08:59.541 bw ( KiB/s): min=15984, max=16784, per=23.43%, avg=16384.00, stdev=565.69, samples=2 00:08:59.541 iops : min= 3996, max= 4196, avg=4096.00, stdev=141.42, samples=2 00:08:59.541 lat (msec) : 4=0.01%, 10=11.65%, 20=71.13%, 50=15.63%, 100=1.56% 00:08:59.541 cpu : usr=2.69%, sys=5.38%, ctx=302, majf=0, minf=1 00:08:59.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:59.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.541 issued rwts: total=4021,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.541 job1: (groupid=0, jobs=1): err= 0: pid=1398056: Mon Dec 9 17:58:22 2024 00:08:59.541 read: IOPS=4406, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1006msec) 00:08:59.541 slat (usec): min=2, max=9773, avg=100.84, stdev=572.49 00:08:59.541 clat (usec): min=2973, max=39604, avg=13034.25, stdev=4296.99 00:08:59.541 lat (usec): min=8265, max=42564, avg=13135.09, stdev=4331.43 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10945], 00:08:59.541 | 30.00th=[11338], 40.00th=[11731], 
50.00th=[12125], 60.00th=[12518], 00:08:59.541 | 70.00th=[12780], 80.00th=[13173], 90.00th=[15139], 95.00th=[23200], 00:08:59.541 | 99.00th=[32900], 99.50th=[33162], 99.90th=[34341], 99.95th=[38536], 00:08:59.541 | 99.99th=[39584] 00:08:59.541 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:08:59.541 slat (usec): min=2, max=29399, avg=114.11, stdev=889.30 00:08:59.541 clat (usec): min=7375, max=79268, avg=15048.87, stdev=9528.89 00:08:59.541 lat (usec): min=7718, max=79356, avg=15162.98, stdev=9604.57 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:08:59.541 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:08:59.541 | 70.00th=[13173], 80.00th=[15139], 90.00th=[17695], 95.00th=[37487], 00:08:59.541 | 99.00th=[62129], 99.50th=[63177], 99.90th=[63177], 99.95th=[66323], 00:08:59.541 | 99.99th=[79168] 00:08:59.541 bw ( KiB/s): min=16384, max=20480, per=26.36%, avg=18432.00, stdev=2896.31, samples=2 00:08:59.541 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:08:59.541 lat (msec) : 4=0.01%, 10=6.09%, 20=86.46%, 50=6.01%, 100=1.43% 00:08:59.541 cpu : usr=3.58%, sys=5.27%, ctx=425, majf=0, minf=1 00:08:59.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:59.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.541 issued rwts: total=4433,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.541 job2: (groupid=0, jobs=1): err= 0: pid=1398057: Mon Dec 9 17:58:22 2024 00:08:59.541 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:08:59.541 slat (usec): min=2, max=12772, avg=125.07, stdev=782.33 00:08:59.541 clat (usec): min=8369, max=30283, avg=16302.20, stdev=3270.20 00:08:59.541 lat (usec): min=8375, 
max=30297, avg=16427.27, stdev=3334.00 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[10814], 5.00th=[12387], 10.00th=[12780], 20.00th=[13435], 00:08:59.541 | 30.00th=[14091], 40.00th=[14484], 50.00th=[15139], 60.00th=[16581], 00:08:59.541 | 70.00th=[18220], 80.00th=[20055], 90.00th=[20579], 95.00th=[21890], 00:08:59.541 | 99.00th=[23200], 99.50th=[24773], 99.90th=[27132], 99.95th=[27132], 00:08:59.541 | 99.99th=[30278] 00:08:59.541 write: IOPS=3891, BW=15.2MiB/s (15.9MB/s)(15.4MiB/1011msec); 0 zone resets 00:08:59.541 slat (usec): min=3, max=13028, avg=133.62, stdev=759.23 00:08:59.541 clat (usec): min=5068, max=66872, avg=17660.08, stdev=8868.77 00:08:59.541 lat (usec): min=5085, max=66891, avg=17793.71, stdev=8931.10 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[ 8291], 5.00th=[10159], 10.00th=[12518], 20.00th=[13042], 00:08:59.541 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14222], 60.00th=[14877], 00:08:59.541 | 70.00th=[17171], 80.00th=[19006], 90.00th=[31065], 95.00th=[35390], 00:08:59.541 | 99.00th=[53216], 99.50th=[57410], 99.90th=[66847], 99.95th=[66847], 00:08:59.541 | 99.99th=[66847] 00:08:59.541 bw ( KiB/s): min=14064, max=16384, per=21.77%, avg=15224.00, stdev=1640.49, samples=2 00:08:59.541 iops : min= 3516, max= 4096, avg=3806.00, stdev=410.12, samples=2 00:08:59.541 lat (msec) : 10=2.71%, 20=78.47%, 50=17.96%, 100=0.86% 00:08:59.541 cpu : usr=4.46%, sys=4.06%, ctx=303, majf=0, minf=2 00:08:59.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:59.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.541 issued rwts: total=3584,3934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.541 job3: (groupid=0, jobs=1): err= 0: pid=1398058: Mon Dec 9 17:58:22 2024 00:08:59.541 read: IOPS=4585, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1005msec) 00:08:59.541 slat (usec): min=3, max=4651, avg=101.01, stdev=540.15 00:08:59.541 clat (usec): min=8998, max=19346, avg=13350.95, stdev=1399.02 00:08:59.541 lat (usec): min=9026, max=19359, avg=13451.96, stdev=1450.43 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11600], 20.00th=[12387], 00:08:59.541 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:08:59.541 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:08:59.541 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19006], 99.95th=[19268], 00:08:59.541 | 99.99th=[19268] 00:08:59.541 write: IOPS=5008, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1005msec); 0 zone resets 00:08:59.541 slat (usec): min=3, max=4411, avg=96.66, stdev=459.56 00:08:59.541 clat (usec): min=3344, max=18249, avg=13014.36, stdev=1396.50 00:08:59.541 lat (usec): min=3986, max=18265, avg=13111.03, stdev=1435.92 00:08:59.541 clat percentiles (usec): 00:08:59.541 | 1.00th=[ 8586], 5.00th=[10945], 10.00th=[11994], 20.00th=[12256], 00:08:59.541 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13304], 00:08:59.541 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14484], 95.00th=[15008], 00:08:59.541 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:08:59.541 | 99.99th=[18220] 00:08:59.541 bw ( KiB/s): min=18984, max=20272, per=28.07%, avg=19628.00, stdev=910.75, samples=2 00:08:59.541 iops : min= 4746, max= 5068, avg=4907.00, stdev=227.69, samples=2 00:08:59.541 lat (msec) : 4=0.03%, 10=2.08%, 20=97.88% 00:08:59.541 cpu : usr=6.08%, sys=9.96%, ctx=502, majf=0, minf=1 00:08:59.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:59.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.541 issued rwts: total=4608,5034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.541 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:08:59.541 00:08:59.541 Run status group 0 (all jobs): 00:08:59.541 READ: bw=64.3MiB/s (67.4MB/s), 13.8MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=65.0MiB (68.2MB), run=1004-1011msec 00:08:59.541 WRITE: bw=68.3MiB/s (71.6MB/s), 15.2MiB/s-19.6MiB/s (15.9MB/s-20.5MB/s), io=69.0MiB (72.4MB), run=1004-1011msec 00:08:59.541 00:08:59.541 Disk stats (read/write): 00:08:59.541 nvme0n1: ios=3215/3584, merge=0/0, ticks=16696/12995, in_queue=29691, util=87.17% 00:08:59.541 nvme0n2: ios=4139/4267, merge=0/0, ticks=14968/17269, in_queue=32237, util=95.73% 00:08:59.541 nvme0n3: ios=3129/3576, merge=0/0, ticks=17633/23437, in_queue=41070, util=90.39% 00:08:59.541 nvme0n4: ios=3970/4096, merge=0/0, ticks=17233/16675, in_queue=33908, util=95.15% 00:08:59.541 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:59.541 [global] 00:08:59.541 thread=1 00:08:59.541 invalidate=1 00:08:59.541 rw=randwrite 00:08:59.541 time_based=1 00:08:59.541 runtime=1 00:08:59.541 ioengine=libaio 00:08:59.541 direct=1 00:08:59.541 bs=4096 00:08:59.541 iodepth=128 00:08:59.541 norandommap=0 00:08:59.541 numjobs=1 00:08:59.541 00:08:59.541 verify_dump=1 00:08:59.541 verify_backlog=512 00:08:59.541 verify_state_save=0 00:08:59.541 do_verify=1 00:08:59.541 verify=crc32c-intel 00:08:59.541 [job0] 00:08:59.541 filename=/dev/nvme0n1 00:08:59.541 [job1] 00:08:59.541 filename=/dev/nvme0n2 00:08:59.541 [job2] 00:08:59.541 filename=/dev/nvme0n3 00:08:59.541 [job3] 00:08:59.541 filename=/dev/nvme0n4 00:08:59.541 Could not set queue depth (nvme0n1) 00:08:59.541 Could not set queue depth (nvme0n2) 00:08:59.541 Could not set queue depth (nvme0n3) 00:08:59.541 Could not set queue depth (nvme0n4) 00:08:59.541 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.541 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.541 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.541 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.541 fio-3.35 00:08:59.541 Starting 4 threads 00:09:00.946 00:09:00.946 job0: (groupid=0, jobs=1): err= 0: pid=1398360: Mon Dec 9 17:58:23 2024 00:09:00.946 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:09:00.946 slat (usec): min=2, max=17373, avg=104.64, stdev=790.14 00:09:00.946 clat (usec): min=5750, max=46492, avg=13433.32, stdev=5347.35 00:09:00.946 lat (usec): min=5765, max=46496, avg=13537.96, stdev=5398.29 00:09:00.946 clat percentiles (usec): 00:09:00.946 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10028], 00:09:00.946 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11994], 60.00th=[12518], 00:09:00.946 | 70.00th=[14877], 80.00th=[15533], 90.00th=[18220], 95.00th=[23462], 00:09:00.946 | 99.00th=[37487], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:09:00.946 | 99.99th=[46400] 00:09:00.946 write: IOPS=4484, BW=17.5MiB/s (18.4MB/s)(17.7MiB/1011msec); 0 zone resets 00:09:00.946 slat (usec): min=3, max=19674, avg=108.06, stdev=699.91 00:09:00.946 clat (usec): min=807, max=46491, avg=16170.32, stdev=6436.17 00:09:00.946 lat (usec): min=824, max=46512, avg=16278.38, stdev=6489.81 00:09:00.946 clat percentiles (usec): 00:09:00.946 | 1.00th=[ 4752], 5.00th=[ 7439], 10.00th=[ 8291], 20.00th=[10945], 00:09:00.946 | 30.00th=[11863], 40.00th=[13960], 50.00th=[16057], 60.00th=[18220], 00:09:00.946 | 70.00th=[18744], 80.00th=[19530], 90.00th=[23462], 95.00th=[29492], 00:09:00.946 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:09:00.946 | 99.99th=[46400] 00:09:00.946 bw ( KiB/s): min=16448, max=18808, per=31.10%, avg=17628.00, stdev=1668.77, samples=2 
00:09:00.946 iops : min= 4112, max= 4702, avg=4407.00, stdev=417.19, samples=2 00:09:00.946 lat (usec) : 1000=0.02% 00:09:00.946 lat (msec) : 4=0.14%, 10=16.43%, 20=70.13%, 50=13.28% 00:09:00.946 cpu : usr=4.85%, sys=8.02%, ctx=398, majf=0, minf=2 00:09:00.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:00.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.946 issued rwts: total=4096,4534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.946 job1: (groupid=0, jobs=1): err= 0: pid=1398380: Mon Dec 9 17:58:23 2024 00:09:00.946 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:00.946 slat (usec): min=2, max=19864, avg=187.13, stdev=1195.48 00:09:00.946 clat (msec): min=5, max=117, avg=27.41, stdev=16.04 00:09:00.946 lat (msec): min=5, max=121, avg=27.60, stdev=16.07 00:09:00.946 clat percentiles (msec): 00:09:00.946 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 19], 00:09:00.946 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 27], 00:09:00.946 | 70.00th=[ 30], 80.00th=[ 34], 90.00th=[ 47], 95.00th=[ 52], 00:09:00.947 | 99.00th=[ 106], 99.50th=[ 115], 99.90th=[ 118], 99.95th=[ 118], 00:09:00.947 | 99.99th=[ 118] 00:09:00.947 write: IOPS=2666, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1005msec); 0 zone resets 00:09:00.947 slat (usec): min=3, max=19803, avg=170.10, stdev=979.99 00:09:00.947 clat (msec): min=4, max=121, avg=21.38, stdev=17.40 00:09:00.947 lat (msec): min=5, max=121, avg=21.55, stdev=17.55 00:09:00.947 clat percentiles (msec): 00:09:00.947 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:09:00.947 | 30.00th=[ 12], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 20], 00:09:00.947 | 70.00th=[ 22], 80.00th=[ 27], 90.00th=[ 35], 95.00th=[ 50], 00:09:00.947 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 
123], 00:09:00.947 | 99.99th=[ 123] 00:09:00.947 bw ( KiB/s): min= 8192, max=12288, per=18.07%, avg=10240.00, stdev=2896.31, samples=2 00:09:00.947 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:09:00.947 lat (msec) : 10=7.77%, 20=44.85%, 50=41.34%, 100=4.58%, 250=1.47% 00:09:00.947 cpu : usr=3.19%, sys=5.78%, ctx=298, majf=0, minf=1 00:09:00.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:00.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.947 issued rwts: total=2560,2680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.947 job2: (groupid=0, jobs=1): err= 0: pid=1398405: Mon Dec 9 17:58:23 2024 00:09:00.947 read: IOPS=3103, BW=12.1MiB/s (12.7MB/s)(12.3MiB/1015msec) 00:09:00.947 slat (usec): min=2, max=13414, avg=168.04, stdev=1010.32 00:09:00.947 clat (usec): min=6079, max=73841, avg=19244.38, stdev=10765.70 00:09:00.947 lat (usec): min=6087, max=73847, avg=19412.42, stdev=10858.18 00:09:00.947 clat percentiles (usec): 00:09:00.947 | 1.00th=[ 9765], 5.00th=[11207], 10.00th=[12125], 20.00th=[12387], 00:09:00.947 | 30.00th=[12649], 40.00th=[13173], 50.00th=[15401], 60.00th=[18482], 00:09:00.947 | 70.00th=[20055], 80.00th=[22414], 90.00th=[31065], 95.00th=[43779], 00:09:00.947 | 99.00th=[68682], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:09:00.947 | 99.99th=[73925] 00:09:00.947 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec); 0 zone resets 00:09:00.947 slat (usec): min=3, max=8891, avg=123.27, stdev=611.82 00:09:00.947 clat (usec): min=2998, max=73832, avg=18880.12, stdev=9298.09 00:09:00.947 lat (usec): min=3005, max=73840, avg=19003.40, stdev=9335.57 00:09:00.947 clat percentiles (usec): 00:09:00.947 | 1.00th=[ 5276], 5.00th=[ 9503], 10.00th=[11076], 20.00th=[12387], 00:09:00.947 | 30.00th=[13042], 
40.00th=[15270], 50.00th=[16909], 60.00th=[18482], 00:09:00.947 | 70.00th=[21890], 80.00th=[23200], 90.00th=[26084], 95.00th=[36439], 00:09:00.947 | 99.00th=[60556], 99.50th=[62129], 99.90th=[64226], 99.95th=[73925], 00:09:00.947 | 99.99th=[73925] 00:09:00.947 bw ( KiB/s): min=12192, max=16080, per=24.94%, avg=14136.00, stdev=2749.23, samples=2 00:09:00.947 iops : min= 3048, max= 4020, avg=3534.00, stdev=687.31, samples=2 00:09:00.947 lat (msec) : 4=0.21%, 10=3.52%, 20=63.04%, 50=30.62%, 100=2.61% 00:09:00.947 cpu : usr=2.17%, sys=4.83%, ctx=313, majf=0, minf=1 00:09:00.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:00.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.947 issued rwts: total=3150,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.947 job3: (groupid=0, jobs=1): err= 0: pid=1398406: Mon Dec 9 17:58:23 2024 00:09:00.947 read: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1007msec) 00:09:00.947 slat (usec): min=2, max=14258, avg=131.95, stdev=820.97 00:09:00.947 clat (usec): min=4785, max=51070, avg=16806.71, stdev=7334.33 00:09:00.947 lat (usec): min=4794, max=51074, avg=16938.67, stdev=7383.57 00:09:00.947 clat percentiles (usec): 00:09:00.947 | 1.00th=[ 8356], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:09:00.947 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13566], 60.00th=[15533], 00:09:00.947 | 70.00th=[18482], 80.00th=[23200], 90.00th=[28443], 95.00th=[31327], 00:09:00.947 | 99.00th=[43779], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:09:00.947 | 99.99th=[51119] 00:09:00.947 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:09:00.947 slat (usec): min=3, max=11275, avg=140.50, stdev=702.36 00:09:00.947 clat (usec): min=3915, max=77096, avg=19403.22, stdev=13371.55 00:09:00.947 lat 
(usec): min=3924, max=77116, avg=19543.72, stdev=13443.83 00:09:00.947 clat percentiles (usec): 00:09:00.947 | 1.00th=[ 5145], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[11863], 00:09:00.947 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13698], 60.00th=[14615], 00:09:00.947 | 70.00th=[22938], 80.00th=[23462], 90.00th=[39060], 95.00th=[52691], 00:09:00.947 | 99.00th=[72877], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:09:00.947 | 99.99th=[77071] 00:09:00.947 bw ( KiB/s): min=12288, max=16384, per=25.29%, avg=14336.00, stdev=2896.31, samples=2 00:09:00.947 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:00.947 lat (msec) : 4=0.09%, 10=7.63%, 20=62.30%, 50=27.17%, 100=2.82% 00:09:00.947 cpu : usr=4.27%, sys=7.65%, ctx=429, majf=0, minf=1 00:09:00.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:00.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.947 issued rwts: total=3443,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.947 00:09:00.947 Run status group 0 (all jobs): 00:09:00.947 READ: bw=51.0MiB/s (53.5MB/s), 9.95MiB/s-15.8MiB/s (10.4MB/s-16.6MB/s), io=51.8MiB (54.3MB), run=1005-1015msec 00:09:00.947 WRITE: bw=55.3MiB/s (58.0MB/s), 10.4MiB/s-17.5MiB/s (10.9MB/s-18.4MB/s), io=56.2MiB (58.9MB), run=1005-1015msec 00:09:00.947 00:09:00.947 Disk stats (read/write): 00:09:00.947 nvme0n1: ios=3637/3679, merge=0/0, ticks=47709/56301, in_queue=104010, util=97.80% 00:09:00.947 nvme0n2: ios=2095/2367, merge=0/0, ticks=17560/23409, in_queue=40969, util=97.15% 00:09:00.947 nvme0n3: ios=2612/3072, merge=0/0, ticks=23609/29090, in_queue=52699, util=96.66% 00:09:00.947 nvme0n4: ios=3126/3095, merge=0/0, ticks=36689/46090, in_queue=82779, util=96.00% 00:09:00.947 17:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@55 -- # sync 00:09:00.947 17:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1398543 00:09:00.947 17:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:00.947 17:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:00.947 [global] 00:09:00.947 thread=1 00:09:00.947 invalidate=1 00:09:00.947 rw=read 00:09:00.947 time_based=1 00:09:00.947 runtime=10 00:09:00.947 ioengine=libaio 00:09:00.947 direct=1 00:09:00.947 bs=4096 00:09:00.947 iodepth=1 00:09:00.947 norandommap=1 00:09:00.947 numjobs=1 00:09:00.947 00:09:00.947 [job0] 00:09:00.947 filename=/dev/nvme0n1 00:09:00.947 [job1] 00:09:00.947 filename=/dev/nvme0n2 00:09:00.947 [job2] 00:09:00.947 filename=/dev/nvme0n3 00:09:00.947 [job3] 00:09:00.947 filename=/dev/nvme0n4 00:09:00.947 Could not set queue depth (nvme0n1) 00:09:00.947 Could not set queue depth (nvme0n2) 00:09:00.947 Could not set queue depth (nvme0n3) 00:09:00.947 Could not set queue depth (nvme0n4) 00:09:01.232 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.232 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.232 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.232 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.232 fio-3.35 00:09:01.232 Starting 4 threads 00:09:04.513 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:04.513 17:58:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_raid_delete raid0 00:09:04.513 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2666496, buflen=4096 00:09:04.513 fio: pid=1398647, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:04.513 17:58:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:04.513 17:58:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:04.513 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11264000, buflen=4096 00:09:04.513 fio: pid=1398646, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:04.771 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56680448, buflen=4096 00:09:04.771 fio: pid=1398644, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:04.771 17:58:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:04.771 17:58:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:05.029 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=25935872, buflen=4096 00:09:05.029 fio: pid=1398645, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:05.030 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:05.030 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:05.288 00:09:05.288 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=1398644: Mon Dec 9 17:58:28 2024 00:09:05.288 read: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(54.1MiB/3524msec) 00:09:05.288 slat (usec): min=5, max=35658, avg=17.23, stdev=427.13 00:09:05.288 clat (usec): min=173, max=912, avg=232.74, stdev=40.32 00:09:05.288 lat (usec): min=178, max=35921, avg=249.97, stdev=429.92 00:09:05.288 clat percentiles (usec): 00:09:05.288 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:09:05.288 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 237], 00:09:05.288 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 297], 00:09:05.288 | 99.00th=[ 412], 99.50th=[ 453], 99.90th=[ 545], 99.95th=[ 562], 00:09:05.288 | 99.99th=[ 644] 00:09:05.288 bw ( KiB/s): min=14584, max=16952, per=63.84%, avg=15728.00, stdev=932.98, samples=6 00:09:05.288 iops : min= 3646, max= 4238, avg=3932.00, stdev=233.24, samples=6 00:09:05.288 lat (usec) : 250=77.66%, 500=22.13%, 750=0.19%, 1000=0.01% 00:09:05.288 cpu : usr=3.07%, sys=6.56%, ctx=13846, majf=0, minf=2 00:09:05.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.288 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.288 issued rwts: total=13839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.288 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1398645: Mon Dec 9 17:58:28 2024 00:09:05.288 read: IOPS=1654, BW=6618KiB/s (6777kB/s)(24.7MiB/3827msec) 00:09:05.288 slat (usec): min=5, max=15690, avg=19.88, stdev=342.99 00:09:05.288 clat (usec): min=174, max=42054, avg=577.39, stdev=3718.73 00:09:05.288 lat (usec): min=179, max=42063, avg=597.27, stdev=3734.44 00:09:05.288 clat percentiles (usec): 00:09:05.288 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 
00:09:05.288 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:09:05.288 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 285], 00:09:05.288 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:05.288 | 99.99th=[42206] 00:09:05.288 bw ( KiB/s): min= 104, max=15326, per=24.28%, avg=5982.57, stdev=5399.66, samples=7 00:09:05.288 iops : min= 26, max= 3831, avg=1495.57, stdev=1349.77, samples=7 00:09:05.288 lat (usec) : 250=75.00%, 500=23.80%, 750=0.30% 00:09:05.288 lat (msec) : 2=0.02%, 4=0.02%, 10=0.03%, 50=0.82% 00:09:05.288 cpu : usr=0.99%, sys=2.64%, ctx=6340, majf=0, minf=1 00:09:05.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.289 issued rwts: total=6333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.289 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1398646: Mon Dec 9 17:58:28 2024 00:09:05.289 read: IOPS=847, BW=3388KiB/s (3469kB/s)(10.7MiB/3247msec) 00:09:05.289 slat (nsec): min=4780, max=62696, avg=11017.07, stdev=5657.57 00:09:05.289 clat (usec): min=195, max=42185, avg=1158.77, stdev=6099.69 00:09:05.289 lat (usec): min=200, max=42197, avg=1169.78, stdev=6100.41 00:09:05.289 clat percentiles (usec): 00:09:05.289 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:09:05.289 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:09:05.289 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 314], 00:09:05.289 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:05.289 | 99.99th=[42206] 00:09:05.289 bw ( KiB/s): min= 128, max= 7464, per=14.85%, avg=3658.67, stdev=3268.76, samples=6 00:09:05.289 iops : min= 32, max= 
1866, avg=914.67, stdev=817.19, samples=6 00:09:05.289 lat (usec) : 250=86.37%, 500=11.27%, 750=0.07% 00:09:05.289 lat (msec) : 50=2.25% 00:09:05.289 cpu : usr=0.37%, sys=1.05%, ctx=2756, majf=0, minf=1 00:09:05.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.289 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.289 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1398647: Mon Dec 9 17:58:28 2024 00:09:05.289 read: IOPS=220, BW=880KiB/s (901kB/s)(2604KiB/2958msec) 00:09:05.289 slat (nsec): min=6363, max=38699, avg=13019.82, stdev=7244.00 00:09:05.289 clat (usec): min=201, max=41494, avg=4492.29, stdev=12357.74 00:09:05.289 lat (usec): min=209, max=41529, avg=4505.30, stdev=12360.76 00:09:05.289 clat percentiles (usec): 00:09:05.289 | 1.00th=[ 217], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:09:05.289 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:09:05.289 | 70.00th=[ 306], 80.00th=[ 408], 90.00th=[40633], 95.00th=[41157], 00:09:05.289 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:05.289 | 99.99th=[41681] 00:09:05.289 bw ( KiB/s): min= 128, max= 3520, per=4.04%, avg=995.20, stdev=1451.00, samples=5 00:09:05.289 iops : min= 32, max= 880, avg=248.80, stdev=362.75, samples=5 00:09:05.289 lat (usec) : 250=2.76%, 500=83.74%, 750=3.07% 00:09:05.289 lat (msec) : 50=10.28% 00:09:05.289 cpu : usr=0.07%, sys=0.44%, ctx=653, majf=0, minf=2 00:09:05.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.289 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.289 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.289 00:09:05.289 Run status group 0 (all jobs): 00:09:05.289 READ: bw=24.1MiB/s (25.2MB/s), 880KiB/s-15.3MiB/s (901kB/s-16.1MB/s), io=92.1MiB (96.5MB), run=2958-3827msec 00:09:05.289 00:09:05.289 Disk stats (read/write): 00:09:05.289 nvme0n1: ios=13200/0, merge=0/0, ticks=2979/0, in_queue=2979, util=93.85% 00:09:05.289 nvme0n2: ios=5644/0, merge=0/0, ticks=4062/0, in_queue=4062, util=98.37% 00:09:05.289 nvme0n3: ios=2795/0, merge=0/0, ticks=4035/0, in_queue=4035, util=99.78% 00:09:05.289 nvme0n4: ios=696/0, merge=0/0, ticks=3806/0, in_queue=3806, util=99.42% 00:09:05.289 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:05.289 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:05.854 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:05.854 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:06.112 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.112 17:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:06.370 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.370 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1398543 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:06.628 nvmf hotplug test: fio failed as expected 00:09:06.628 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.886 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.144 rmmod nvme_tcp 00:09:07.144 rmmod nvme_fabrics 00:09:07.144 rmmod nvme_keyring 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1396507 ']' 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1396507 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1396507 ']' 
00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1396507 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.144 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1396507 00:09:07.144 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.144 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.144 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1396507' 00:09:07.144 killing process with pid 1396507 00:09:07.144 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1396507 00:09:07.144 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1396507 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.402 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.310 00:09:09.310 real 0m24.458s 00:09:09.310 user 1m25.668s 00:09:09.310 sys 0m7.070s 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.310 ************************************ 00:09:09.310 END TEST nvmf_fio_target 00:09:09.310 ************************************ 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.310 17:58:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.569 ************************************ 00:09:09.569 START TEST nvmf_bdevio 00:09:09.569 ************************************ 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.569 * Looking for test storage... 
00:09:09.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 
0 )) 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc 
geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.569 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.570 --rc genhtml_branch_coverage=1 00:09:09.570 --rc genhtml_function_coverage=1 00:09:09.570 --rc genhtml_legend=1 00:09:09.570 --rc geninfo_all_blocks=1 00:09:09.570 --rc geninfo_unexecuted_blocks=1 00:09:09.570 00:09:09.570 ' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.570 17:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.105 17:58:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.105 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:12.106 17:58:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:12.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:12.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:12.106 
17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:12.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:12.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:09:12.106 00:09:12.106 --- 10.0.0.2 ping statistics --- 00:09:12.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.106 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:12.106 00:09:12.106 --- 10.0.0.1 ping statistics --- 00:09:12.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.106 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.106 17:58:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1401302 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1401302 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1401302 ']' 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.106 17:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.106 [2024-12-09 17:58:34.896801] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:09:12.106 [2024-12-09 17:58:34.896897] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.107 [2024-12-09 17:58:34.972116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.107 [2024-12-09 17:58:35.030980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.107 [2024-12-09 17:58:35.031036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.107 [2024-12-09 17:58:35.031050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.107 [2024-12-09 17:58:35.031075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.107 [2024-12-09 17:58:35.031085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:12.107 [2024-12-09 17:58:35.032845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.107 [2024-12-09 17:58:35.032913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.107 [2024-12-09 17:58:35.032957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.107 [2024-12-09 17:58:35.032960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.364 [2024-12-09 17:58:35.188723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.364 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.365 17:58:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.365 Malloc0 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.365 [2024-12-09 17:58:35.259031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.365 { 00:09:12.365 "params": { 00:09:12.365 "name": "Nvme$subsystem", 00:09:12.365 "trtype": "$TEST_TRANSPORT", 00:09:12.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.365 "adrfam": "ipv4", 00:09:12.365 "trsvcid": "$NVMF_PORT", 00:09:12.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.365 "hdgst": ${hdgst:-false}, 00:09:12.365 "ddgst": ${ddgst:-false} 00:09:12.365 }, 00:09:12.365 "method": "bdev_nvme_attach_controller" 00:09:12.365 } 00:09:12.365 EOF 00:09:12.365 )") 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:12.365 17:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.365 "params": { 00:09:12.365 "name": "Nvme1", 00:09:12.365 "trtype": "tcp", 00:09:12.365 "traddr": "10.0.0.2", 00:09:12.365 "adrfam": "ipv4", 00:09:12.365 "trsvcid": "4420", 00:09:12.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.365 "hdgst": false, 00:09:12.365 "ddgst": false 00:09:12.365 }, 00:09:12.365 "method": "bdev_nvme_attach_controller" 00:09:12.365 }' 00:09:12.365 [2024-12-09 17:58:35.309009] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:09:12.365 [2024-12-09 17:58:35.309087] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401447 ] 00:09:12.365 [2024-12-09 17:58:35.379555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.622 [2024-12-09 17:58:35.444838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.622 [2024-12-09 17:58:35.444894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.622 [2024-12-09 17:58:35.444897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.880 I/O targets: 00:09:12.880 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:12.880 00:09:12.880 00:09:12.880 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.880 http://cunit.sourceforge.net/ 00:09:12.880 00:09:12.880 00:09:12.880 Suite: bdevio tests on: Nvme1n1 00:09:12.880 Test: blockdev write read block ...passed 00:09:12.880 Test: blockdev write zeroes read block ...passed 00:09:12.880 Test: blockdev write zeroes read no split ...passed 00:09:12.880 Test: blockdev write zeroes read split 
...passed 00:09:12.880 Test: blockdev write zeroes read split partial ...passed 00:09:12.880 Test: blockdev reset ...[2024-12-09 17:58:35.854338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:12.880 [2024-12-09 17:58:35.854445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61a8c0 (9): Bad file descriptor 00:09:12.880 [2024-12-09 17:58:35.870925] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:12.880 passed 00:09:12.880 Test: blockdev write read 8 blocks ...passed 00:09:12.880 Test: blockdev write read size > 128k ...passed 00:09:12.880 Test: blockdev write read invalid size ...passed 00:09:13.138 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.138 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.138 Test: blockdev write read max offset ...passed 00:09:13.138 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.138 Test: blockdev writev readv 8 blocks ...passed 00:09:13.138 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.138 Test: blockdev writev readv block ...passed 00:09:13.138 Test: blockdev writev readv size > 128k ...passed 00:09:13.138 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.138 Test: blockdev comparev and writev ...[2024-12-09 17:58:36.082633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.082671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.082696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 
17:58:36.082714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.083045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.083070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.083092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.083109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.083427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.083452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.083475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.083515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.083858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.083883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.083905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:13.138 [2024-12-09 17:58:36.083922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:13.138 passed 00:09:13.138 Test: blockdev nvme passthru rw ...passed 00:09:13.138 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:58:36.165824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.138 [2024-12-09 17:58:36.165854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.165997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.138 [2024-12-09 17:58:36.166019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.166152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.138 [2024-12-09 17:58:36.166176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:13.138 [2024-12-09 17:58:36.166311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:13.138 [2024-12-09 17:58:36.166335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:13.138 passed 00:09:13.397 Test: blockdev nvme admin passthru ...passed 00:09:13.397 Test: blockdev copy ...passed 00:09:13.397 00:09:13.397 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.397 suites 1 1 n/a 0 0 00:09:13.397 tests 23 23 23 0 0 00:09:13.397 asserts 152 152 152 0 n/a 00:09:13.397 00:09:13.397 Elapsed time = 0.966 seconds 
00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.397 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.655 rmmod nvme_tcp 00:09:13.655 rmmod nvme_fabrics 00:09:13.655 rmmod nvme_keyring 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1401302 ']' 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1401302 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1401302 ']' 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1401302 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1401302 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1401302' 00:09:13.655 killing process with pid 1401302 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1401302 00:09:13.655 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1401302 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.914 17:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.821 17:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.821 00:09:15.821 real 0m6.488s 00:09:15.821 user 0m10.118s 00:09:15.821 sys 0m2.188s 00:09:15.821 17:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.821 17:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:15.821 ************************************ 00:09:15.821 END TEST nvmf_bdevio 00:09:15.821 ************************************ 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:16.080 00:09:16.080 real 3m56.722s 00:09:16.080 user 10m18.543s 00:09:16.080 sys 1m7.262s 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.080 ************************************ 00:09:16.080 END TEST nvmf_target_core 00:09:16.080 ************************************ 00:09:16.080 17:58:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:16.080 17:58:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.080 17:58:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.080 17:58:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:16.080 ************************************ 00:09:16.080 START TEST nvmf_target_extra 00:09:16.080 ************************************ 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:16.080 * Looking for test storage... 00:09:16.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.080 17:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.080 --rc genhtml_branch_coverage=1 00:09:16.080 --rc genhtml_function_coverage=1 00:09:16.080 --rc genhtml_legend=1 00:09:16.080 --rc geninfo_all_blocks=1 
00:09:16.080 --rc geninfo_unexecuted_blocks=1 00:09:16.080 00:09:16.080 ' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.080 --rc genhtml_branch_coverage=1 00:09:16.080 --rc genhtml_function_coverage=1 00:09:16.080 --rc genhtml_legend=1 00:09:16.080 --rc geninfo_all_blocks=1 00:09:16.080 --rc geninfo_unexecuted_blocks=1 00:09:16.080 00:09:16.080 ' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.080 --rc genhtml_branch_coverage=1 00:09:16.080 --rc genhtml_function_coverage=1 00:09:16.080 --rc genhtml_legend=1 00:09:16.080 --rc geninfo_all_blocks=1 00:09:16.080 --rc geninfo_unexecuted_blocks=1 00:09:16.080 00:09:16.080 ' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.080 --rc genhtml_branch_coverage=1 00:09:16.080 --rc genhtml_function_coverage=1 00:09:16.080 --rc genhtml_legend=1 00:09:16.080 --rc geninfo_all_blocks=1 00:09:16.080 --rc geninfo_unexecuted_blocks=1 00:09:16.080 00:09:16.080 ' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.080 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.081 17:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:16.339 ************************************ 00:09:16.339 START TEST nvmf_example 00:09:16.339 ************************************ 00:09:16.339 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:16.340 * Looking for test storage... 00:09:16.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.340 
17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.340 --rc genhtml_branch_coverage=1 00:09:16.340 --rc genhtml_function_coverage=1 00:09:16.340 --rc genhtml_legend=1 00:09:16.340 --rc geninfo_all_blocks=1 00:09:16.340 --rc geninfo_unexecuted_blocks=1 00:09:16.340 00:09:16.340 ' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.340 --rc genhtml_branch_coverage=1 00:09:16.340 --rc genhtml_function_coverage=1 00:09:16.340 --rc genhtml_legend=1 00:09:16.340 --rc geninfo_all_blocks=1 00:09:16.340 --rc geninfo_unexecuted_blocks=1 00:09:16.340 00:09:16.340 ' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.340 --rc genhtml_branch_coverage=1 00:09:16.340 --rc genhtml_function_coverage=1 00:09:16.340 --rc genhtml_legend=1 00:09:16.340 --rc geninfo_all_blocks=1 00:09:16.340 --rc geninfo_unexecuted_blocks=1 00:09:16.340 00:09:16.340 ' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.340 --rc 
genhtml_branch_coverage=1 00:09:16.340 --rc genhtml_function_coverage=1 00:09:16.340 --rc genhtml_legend=1 00:09:16.340 --rc geninfo_all_blocks=1 00:09:16.340 --rc geninfo_unexecuted_blocks=1 00:09:16.340 00:09:16.340 ' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:16.340 17:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:16.340 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.341 
17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.341 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.872 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.873 17:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:18.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:18.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:18.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.873 17:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:18.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.873 
17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:09:18.873 00:09:18.873 --- 10.0.0.2 ping statistics --- 00:09:18.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.873 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:09:18.873 00:09:18.873 --- 10.0.0.1 ping statistics --- 00:09:18.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.873 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.873 17:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.873 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1403591 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1403591 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1403591 ']' 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:18.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.874 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:19.132 
17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:19.132 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:31.331 Initializing NVMe Controllers 00:09:31.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:31.331 Initialization complete. Launching workers. 00:09:31.331 ======================================================== 00:09:31.331 Latency(us) 00:09:31.331 Device Information : IOPS MiB/s Average min max 00:09:31.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14321.74 55.94 4469.08 664.64 15370.68 00:09:31.331 ======================================================== 00:09:31.331 Total : 14321.74 55.94 4469.08 664.64 15370.68 00:09:31.331 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.331 rmmod nvme_tcp 00:09:31.331 rmmod nvme_fabrics 00:09:31.331 rmmod nvme_keyring 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1403591 ']' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1403591 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1403591 ']' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1403591 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1403591 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1403591' 00:09:31.331 killing process with pid 1403591 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1403591 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1403591 00:09:31.331 nvmf threads initialize successfully 00:09:31.331 bdev subsystem init successfully 00:09:31.331 created a nvmf target service 00:09:31.331 create targets's poll groups done 00:09:31.331 all subsystems of target started 00:09:31.331 nvmf target is running 00:09:31.331 all subsystems of target stopped 00:09:31.331 destroy targets's poll groups done 00:09:31.331 destroyed the nvmf target service 00:09:31.331 bdev subsystem 
finish successfully 00:09:31.331 nvmf threads destroy successfully 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.331 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.590 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:31.590 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:31.590 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.590 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.851 00:09:31.851 real 0m15.502s 00:09:31.851 user 0m41.341s 00:09:31.851 sys 0m3.898s 00:09:31.851 
17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.851 ************************************ 00:09:31.851 END TEST nvmf_example 00:09:31.851 ************************************ 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:31.851 ************************************ 00:09:31.851 START TEST nvmf_filesystem 00:09:31.851 ************************************ 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:31.851 * Looking for test storage... 
00:09:31.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:31.851 
17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:31.851 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:31.852 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:31.852 --rc genhtml_branch_coverage=1 00:09:31.852 --rc genhtml_function_coverage=1 00:09:31.852 --rc genhtml_legend=1 00:09:31.852 --rc geninfo_all_blocks=1 00:09:31.852 --rc geninfo_unexecuted_blocks=1 00:09:31.852 00:09:31.852 ' 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:31.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.852 --rc genhtml_branch_coverage=1 00:09:31.852 --rc genhtml_function_coverage=1 00:09:31.852 --rc genhtml_legend=1 00:09:31.852 --rc geninfo_all_blocks=1 00:09:31.852 --rc geninfo_unexecuted_blocks=1 00:09:31.852 00:09:31.852 ' 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:31.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.852 --rc genhtml_branch_coverage=1 00:09:31.852 --rc genhtml_function_coverage=1 00:09:31.852 --rc genhtml_legend=1 00:09:31.852 --rc geninfo_all_blocks=1 00:09:31.852 --rc geninfo_unexecuted_blocks=1 00:09:31.852 00:09:31.852 ' 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:31.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.852 --rc genhtml_branch_coverage=1 00:09:31.852 --rc genhtml_function_coverage=1 00:09:31.852 --rc genhtml_legend=1 00:09:31.852 --rc geninfo_all_blocks=1 00:09:31.852 --rc geninfo_unexecuted_blocks=1 00:09:31.852 00:09:31.852 ' 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:31.852 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:31.852 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:31.852 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:31.852 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:31.852 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:31.853 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:31.853 
17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:31.853 #define SPDK_CONFIG_H 00:09:31.853 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:31.853 #define SPDK_CONFIG_APPS 1 00:09:31.853 #define SPDK_CONFIG_ARCH native 00:09:31.853 #undef SPDK_CONFIG_ASAN 00:09:31.853 #undef SPDK_CONFIG_AVAHI 00:09:31.853 #undef SPDK_CONFIG_CET 00:09:31.853 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:31.853 #define SPDK_CONFIG_COVERAGE 1 00:09:31.853 #define SPDK_CONFIG_CROSS_PREFIX 00:09:31.853 #undef SPDK_CONFIG_CRYPTO 00:09:31.853 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:31.853 #undef SPDK_CONFIG_CUSTOMOCF 00:09:31.853 #undef SPDK_CONFIG_DAOS 00:09:31.853 #define SPDK_CONFIG_DAOS_DIR 00:09:31.853 #define SPDK_CONFIG_DEBUG 1 00:09:31.853 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:31.853 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:31.853 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:31.853 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:31.853 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:31.853 #undef SPDK_CONFIG_DPDK_UADK 00:09:31.853 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:31.853 #define SPDK_CONFIG_EXAMPLES 1 00:09:31.853 #undef SPDK_CONFIG_FC 00:09:31.853 #define SPDK_CONFIG_FC_PATH 00:09:31.853 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:31.853 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:31.853 #define SPDK_CONFIG_FSDEV 1 00:09:31.853 #undef SPDK_CONFIG_FUSE 00:09:31.853 #undef SPDK_CONFIG_FUZZER 00:09:31.853 #define SPDK_CONFIG_FUZZER_LIB 00:09:31.853 #undef SPDK_CONFIG_GOLANG 00:09:31.853 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:31.853 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:31.853 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:31.853 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:31.853 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:31.853 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:31.853 #undef SPDK_CONFIG_HAVE_LZ4 00:09:31.853 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:31.853 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:31.853 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:31.853 #define SPDK_CONFIG_IDXD 1 00:09:31.853 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:31.853 #undef SPDK_CONFIG_IPSEC_MB 00:09:31.853 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:31.853 #define SPDK_CONFIG_ISAL 1 00:09:31.853 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:31.853 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:31.853 #define SPDK_CONFIG_LIBDIR 00:09:31.853 #undef SPDK_CONFIG_LTO 00:09:31.853 #define SPDK_CONFIG_MAX_LCORES 128 00:09:31.853 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:31.853 #define SPDK_CONFIG_NVME_CUSE 1 00:09:31.853 #undef SPDK_CONFIG_OCF 00:09:31.853 #define SPDK_CONFIG_OCF_PATH 00:09:31.853 #define SPDK_CONFIG_OPENSSL_PATH 00:09:31.853 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:31.853 #define SPDK_CONFIG_PGO_DIR 00:09:31.853 #undef SPDK_CONFIG_PGO_USE 00:09:31.853 #define SPDK_CONFIG_PREFIX /usr/local 00:09:31.853 #undef SPDK_CONFIG_RAID5F 00:09:31.853 #undef SPDK_CONFIG_RBD 00:09:31.853 #define SPDK_CONFIG_RDMA 1 00:09:31.853 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:31.853 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:31.853 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:31.853 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:31.853 #define SPDK_CONFIG_SHARED 1 00:09:31.853 #undef SPDK_CONFIG_SMA 00:09:31.853 #define SPDK_CONFIG_TESTS 1 00:09:31.853 #undef SPDK_CONFIG_TSAN 00:09:31.853 #define SPDK_CONFIG_UBLK 1 00:09:31.853 #define SPDK_CONFIG_UBSAN 1 00:09:31.853 #undef SPDK_CONFIG_UNIT_TESTS 00:09:31.853 #undef SPDK_CONFIG_URING 00:09:31.853 #define SPDK_CONFIG_URING_PATH 00:09:31.853 #undef SPDK_CONFIG_URING_ZNS 00:09:31.853 #undef SPDK_CONFIG_USDT 00:09:31.853 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:31.853 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:31.853 #define SPDK_CONFIG_VFIO_USER 1 00:09:31.853 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:31.853 #define SPDK_CONFIG_VHOST 1 00:09:31.853 #define SPDK_CONFIG_VIRTIO 1 00:09:31.853 #undef SPDK_CONFIG_VTUNE 00:09:31.853 #define SPDK_CONFIG_VTUNE_DIR 00:09:31.853 #define SPDK_CONFIG_WERROR 1 00:09:31.853 #define SPDK_CONFIG_WPDK_DIR 00:09:31.853 #undef SPDK_CONFIG_XNVME 00:09:31.853 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:31.853 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:31.854 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:32.115 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:32.115 
17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:32.115 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:32.116 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:32.116 
17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:32.116 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:32.116 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1405282 ]] 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1405282 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HOKB7S 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HOKB7S/tests/target /tmp/spdk.HOKB7S 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=60232515584 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67273338880 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7040823296 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33626636288 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636667392 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:32.117 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13432246272 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=13454667776 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22421504 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33636290560 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636671488 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=380928 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6727319552 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6727331840 00:09:32.118 17:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:32.118 * Looking for test storage... 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=60232515584 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@394 -- # new_size=9255415808 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.118 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 
00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.118 17:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc 
genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.118 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:32.119 17:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:32.119 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.652 17:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:34.652 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.652 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:34.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.653 17:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:34.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:34.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:34.653 17:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:34.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:09:34.653 00:09:34.653 --- 10.0.0.2 ping statistics --- 00:09:34.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.653 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:09:34.653 00:09:34.653 --- 10.0.0.1 ping statistics --- 00:09:34.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.653 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:34.653 17:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 ************************************ 00:09:34.653 START TEST nvmf_filesystem_no_in_capsule 00:09:34.653 ************************************ 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1406922 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1406922 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1406922 ']' 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.653 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 [2024-12-09 17:58:57.476008] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:09:34.653 [2024-12-09 17:58:57.476081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.653 [2024-12-09 17:58:57.549385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.653 [2024-12-09 17:58:57.610349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.653 [2024-12-09 17:58:57.610415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.653 [2024-12-09 17:58:57.610428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.653 [2024-12-09 17:58:57.610438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.653 [2024-12-09 17:58:57.610447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.654 [2024-12-09 17:58:57.612063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.654 [2024-12-09 17:58:57.612189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.654 [2024-12-09 17:58:57.612236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.654 [2024-12-09 17:58:57.612238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.912 [2024-12-09 17:58:57.789298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.912 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 Malloc1 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 [2024-12-09 17:58:57.994253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:35.170 17:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:35.170 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:35.170 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.170 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:35.170 { 00:09:35.170 "name": "Malloc1", 00:09:35.170 "aliases": [ 00:09:35.170 "9ef54acc-718a-4333-9099-f6419028d64e" 00:09:35.170 ], 00:09:35.170 "product_name": "Malloc disk", 00:09:35.170 "block_size": 512, 00:09:35.170 "num_blocks": 1048576, 00:09:35.170 "uuid": "9ef54acc-718a-4333-9099-f6419028d64e", 00:09:35.170 "assigned_rate_limits": { 00:09:35.170 "rw_ios_per_sec": 0, 00:09:35.170 "rw_mbytes_per_sec": 0, 00:09:35.170 "r_mbytes_per_sec": 0, 00:09:35.170 "w_mbytes_per_sec": 0 00:09:35.170 }, 00:09:35.170 "claimed": true, 00:09:35.170 "claim_type": "exclusive_write", 00:09:35.170 "zoned": false, 00:09:35.170 "supported_io_types": { 00:09:35.170 "read": true, 00:09:35.170 "write": true, 00:09:35.170 "unmap": true, 00:09:35.170 "flush": true, 00:09:35.170 "reset": true, 00:09:35.170 "nvme_admin": false, 00:09:35.170 "nvme_io": false, 00:09:35.170 "nvme_io_md": false, 00:09:35.170 "write_zeroes": true, 00:09:35.170 "zcopy": true, 00:09:35.170 "get_zone_info": false, 00:09:35.170 "zone_management": false, 00:09:35.170 "zone_append": false, 00:09:35.170 "compare": false, 00:09:35.170 "compare_and_write": 
false, 00:09:35.170 "abort": true, 00:09:35.170 "seek_hole": false, 00:09:35.170 "seek_data": false, 00:09:35.170 "copy": true, 00:09:35.170 "nvme_iov_md": false 00:09:35.170 }, 00:09:35.170 "memory_domains": [ 00:09:35.170 { 00:09:35.170 "dma_device_id": "system", 00:09:35.170 "dma_device_type": 1 00:09:35.170 }, 00:09:35.170 { 00:09:35.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.170 "dma_device_type": 2 00:09:35.170 } 00:09:35.171 ], 00:09:35.171 "driver_specific": {} 00:09:35.171 } 00:09:35.171 ]' 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:35.171 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:36.101 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:36.101 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:36.101 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.101 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:36.101 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:37.998 17:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:37.998 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:38.255 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:38.512 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:39.445 17:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.445 ************************************ 00:09:39.445 START TEST filesystem_ext4 00:09:39.445 ************************************ 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:39.445 17:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:09:39.445 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:39.445 mke2fs 1.47.0 (5-Feb-2023)
00:09:39.703 Discarding device blocks: 0/522240 done
00:09:39.703 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:39.703 Filesystem UUID: d7bbd908-4d6f-42bf-9081-ceeee8c1c4f5
00:09:39.703 Superblock backups stored on blocks:
00:09:39.703 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:39.703
00:09:39.703 Allocating group tables: 0/64 done
00:09:39.703 Writing inode tables: 0/64 done
00:09:39.703 Creating journal (8192 blocks): done
00:09:42.010 Writing superblocks and filesystem accounting information: 0/64 done
00:09:42.010
00:09:42.010 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:09:42.010 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:48.618 17:59:10
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1406922 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:48.618 00:09:48.618 real 0m8.503s 00:09:48.618 user 0m0.024s 00:09:48.618 sys 0m0.057s 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:48.618 ************************************ 00:09:48.618 END TEST filesystem_ext4 00:09:48.618 ************************************ 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:48.618 
17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.618 ************************************ 00:09:48.618 START TEST filesystem_btrfs 00:09:48.618 ************************************ 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:48.618 17:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:09:48.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:48.618 btrfs-progs v6.8.1
00:09:48.618 See https://btrfs.readthedocs.io for more information.
00:09:48.618
00:09:48.618 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:48.618 NOTE: several default settings have changed in version 5.15, please make sure
00:09:48.618 this does not affect your deployments:
00:09:48.618 - DUP for metadata (-m dup)
00:09:48.618 - enabled no-holes (-O no-holes)
00:09:48.618 - enabled free-space-tree (-R free-space-tree)
00:09:48.618
00:09:48.618 Label: (null)
00:09:48.618 UUID: 9e23cda3-ce37-4e01-9207-fcb96e271bf4
00:09:48.618 Node size: 16384
00:09:48.618 Sector size: 4096 (CPU page size: 4096)
00:09:48.618 Filesystem size: 510.00MiB
00:09:48.618 Block group profiles:
00:09:48.618 Data: single 8.00MiB
00:09:48.618 Metadata: DUP 32.00MiB
00:09:48.618 System: DUP 8.00MiB
00:09:48.618 SSD detected: yes
00:09:48.618 Zoned device: no
00:09:48.618 Features: extref, skinny-metadata, no-holes, free-space-tree
00:09:48.618 Checksum: crc32c
00:09:48.618 Number of devices: 1
00:09:48.618 Devices:
00:09:48.618 ID SIZE PATH
00:09:48.618 1 510.00MiB /dev/nvme0n1p1
00:09:48.618
00:09:48.618 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:09:48.618 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:48.878 17:59:11
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:48.878 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:48.878 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:48.878 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:48.878 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:48.878 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:49.135 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1406922 00:09:49.135 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:49.135 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:49.135 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:49.135 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:49.135 00:09:49.135 real 0m0.958s 00:09:49.135 user 0m0.023s 00:09:49.135 sys 0m0.103s 00:09:49.135 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.136 
17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:49.136 ************************************ 00:09:49.136 END TEST filesystem_btrfs 00:09:49.136 ************************************ 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.136 ************************************ 00:09:49.136 START TEST filesystem_xfs 00:09:49.136 ************************************ 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:09:49.136 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:49.136 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:09:49.136 = sectsz=512 attr=2, projid32bit=1
00:09:49.136 = crc=1 finobt=1, sparse=1, rmapbt=0
00:09:49.136 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:09:49.136 data = bsize=4096 blocks=130560, imaxpct=25
00:09:49.136 = sunit=0 swidth=0 blks
00:09:49.136 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:09:49.136 log =internal log bsize=4096 blocks=16384, version=2
00:09:49.136 = sectsz=512 sunit=0 blks, lazy-count=1
00:09:49.136 realtime =none extsz=4096 blocks=0, rtextents=0
00:09:50.068 Discarding blocks...Done.
00:09:50.068 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:50.068 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1406922 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:52.591 17:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:52.591 00:09:52.591 real 0m3.403s 00:09:52.591 user 0m0.017s 00:09:52.591 sys 0m0.064s 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:52.591 ************************************ 00:09:52.591 END TEST filesystem_xfs 00:09:52.591 ************************************ 00:09:52.591 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1406922 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1406922 ']' 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1406922 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.849 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1406922 00:09:53.107 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.107 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.107 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1406922' 00:09:53.107 killing process with pid 1406922 00:09:53.107 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1406922 00:09:53.107 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1406922 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:53.367 00:09:53.367 real 0m18.920s 00:09:53.367 user 1m13.379s 00:09:53.367 sys 0m2.274s 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.367 ************************************ 00:09:53.367 END TEST nvmf_filesystem_no_in_capsule 00:09:53.367 ************************************ 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.367 17:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.367 ************************************ 00:09:53.367 START TEST nvmf_filesystem_in_capsule 00:09:53.367 ************************************ 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1409422 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1409422 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1409422 ']' 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.367 17:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.367 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.626 [2024-12-09 17:59:16.451818] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:09:53.626 [2024-12-09 17:59:16.451907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.626 [2024-12-09 17:59:16.522494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.626 [2024-12-09 17:59:16.575918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.626 [2024-12-09 17:59:16.575975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.626 [2024-12-09 17:59:16.576002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.626 [2024-12-09 17:59:16.576012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.626 [2024-12-09 17:59:16.576022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:53.626 [2024-12-09 17:59:16.577424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.626 [2024-12-09 17:59:16.577533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.626 [2024-12-09 17:59:16.577620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.626 [2024-12-09 17:59:16.577624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.884 [2024-12-09 17:59:16.730734] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.884 Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.884 17:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.884 [2024-12-09 17:59:16.914174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:53.884 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.884 17:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:54.142 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.142 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:54.142 { 00:09:54.142 "name": "Malloc1", 00:09:54.142 "aliases": [ 00:09:54.142 "5af035e4-769b-4135-8875-155b5a9eb2c8" 00:09:54.142 ], 00:09:54.142 "product_name": "Malloc disk", 00:09:54.142 "block_size": 512, 00:09:54.142 "num_blocks": 1048576, 00:09:54.142 "uuid": "5af035e4-769b-4135-8875-155b5a9eb2c8", 00:09:54.142 "assigned_rate_limits": { 00:09:54.142 "rw_ios_per_sec": 0, 00:09:54.142 "rw_mbytes_per_sec": 0, 00:09:54.142 "r_mbytes_per_sec": 0, 00:09:54.142 "w_mbytes_per_sec": 0 00:09:54.142 }, 00:09:54.142 "claimed": true, 00:09:54.142 "claim_type": "exclusive_write", 00:09:54.142 "zoned": false, 00:09:54.142 "supported_io_types": { 00:09:54.142 "read": true, 00:09:54.142 "write": true, 00:09:54.142 "unmap": true, 00:09:54.142 "flush": true, 00:09:54.142 "reset": true, 00:09:54.142 "nvme_admin": false, 00:09:54.142 "nvme_io": false, 00:09:54.142 "nvme_io_md": false, 00:09:54.142 "write_zeroes": true, 00:09:54.142 "zcopy": true, 00:09:54.142 "get_zone_info": false, 00:09:54.142 "zone_management": false, 00:09:54.142 "zone_append": false, 00:09:54.142 "compare": false, 00:09:54.142 "compare_and_write": false, 00:09:54.142 "abort": true, 00:09:54.142 "seek_hole": false, 00:09:54.142 "seek_data": false, 00:09:54.142 "copy": true, 00:09:54.142 "nvme_iov_md": false 00:09:54.142 }, 00:09:54.142 "memory_domains": [ 00:09:54.142 { 00:09:54.142 "dma_device_id": "system", 00:09:54.142 "dma_device_type": 1 00:09:54.142 }, 00:09:54.142 { 00:09:54.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.142 "dma_device_type": 2 00:09:54.142 } 00:09:54.142 ], 00:09:54.142 
"driver_specific": {} 00:09:54.142 } 00:09:54.142 ]' 00:09:54.142 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:54.142 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:54.142 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:54.142 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:54.142 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:54.142 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:54.142 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:54.142 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.707 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.707 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:54.707 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.707 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:09:54.707 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:56.604 17:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:56.604 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:56.863 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:57.427 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.360 ************************************ 00:09:58.360 START TEST filesystem_in_capsule_ext4 00:09:58.360 ************************************ 00:09:58.360 17:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:58.360 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:58.360 mke2fs 1.47.0 (5-Feb-2023) 00:09:58.360 Discarding device blocks: 
0/522240 done 00:09:58.617 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:58.617 Filesystem UUID: 1d951956-a45f-406f-a430-97f006ae6fcb 00:09:58.617 Superblock backups stored on blocks: 00:09:58.617 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:58.617 00:09:58.617 Allocating group tables: 0/64 done 00:09:58.617 Writing inode tables: 0/64 done 00:09:59.182 Creating journal (8192 blocks): done 00:09:59.182 Writing superblocks and filesystem accounting information: 0/64 done 00:09:59.182 00:09:59.182 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:59.182 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.438 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.438 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:04.438 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.438 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:04.438 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:04.438 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1409422 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.696 00:10:04.696 real 0m6.260s 00:10:04.696 user 0m0.019s 00:10:04.696 sys 0m0.052s 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:04.696 ************************************ 00:10:04.696 END TEST filesystem_in_capsule_ext4 00:10:04.696 ************************************ 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.696 ************************************ 00:10:04.696 START 
TEST filesystem_in_capsule_btrfs 00:10:04.696 ************************************ 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:04.696 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:04.697 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:04.954 btrfs-progs v6.8.1 00:10:04.954 See https://btrfs.readthedocs.io for more information. 00:10:04.954 00:10:04.954 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:04.954 NOTE: several default settings have changed in version 5.15, please make sure 00:10:04.954 this does not affect your deployments: 00:10:04.954 - DUP for metadata (-m dup) 00:10:04.954 - enabled no-holes (-O no-holes) 00:10:04.954 - enabled free-space-tree (-R free-space-tree) 00:10:04.954 00:10:04.954 Label: (null) 00:10:04.954 UUID: 5a13eae2-da02-4ea4-8df4-61134aeb85af 00:10:04.954 Node size: 16384 00:10:04.954 Sector size: 4096 (CPU page size: 4096) 00:10:04.954 Filesystem size: 510.00MiB 00:10:04.954 Block group profiles: 00:10:04.954 Data: single 8.00MiB 00:10:04.954 Metadata: DUP 32.00MiB 00:10:04.954 System: DUP 8.00MiB 00:10:04.954 SSD detected: yes 00:10:04.954 Zoned device: no 00:10:04.954 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:04.954 Checksum: crc32c 00:10:04.954 Number of devices: 1 00:10:04.954 Devices: 00:10:04.954 ID SIZE PATH 00:10:04.954 1 510.00MiB /dev/nvme0n1p1 00:10:04.954 00:10:04.954 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:04.954 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1409422 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:05.212 00:10:05.212 real 0m0.542s 00:10:05.212 user 0m0.019s 00:10:05.212 sys 0m0.098s 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:05.212 ************************************ 00:10:05.212 END TEST filesystem_in_capsule_btrfs 00:10:05.212 ************************************ 00:10:05.212 17:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.212 ************************************ 00:10:05.212 START TEST filesystem_in_capsule_xfs 00:10:05.212 ************************************ 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:05.212 
17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:05.212 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:05.469 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:05.469 = sectsz=512 attr=2, projid32bit=1 00:10:05.469 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:05.469 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:05.469 data = bsize=4096 blocks=130560, imaxpct=25 00:10:05.469 = sunit=0 swidth=0 blks 00:10:05.469 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:05.469 log =internal log bsize=4096 blocks=16384, version=2 00:10:05.469 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:05.469 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:06.401 Discarding blocks...Done. 
00:10:06.401 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:06.401 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:08.299 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1409422 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:08.299 00:10:08.299 real 0m2.970s 00:10:08.299 user 0m0.021s 00:10:08.299 sys 0m0.059s 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:08.299 ************************************ 00:10:08.299 END TEST filesystem_in_capsule_xfs 00:10:08.299 ************************************ 00:10:08.299 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.557 17:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1409422 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1409422 ']' 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1409422 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:08.557 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.557 17:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1409422 00:10:08.815 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.815 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.815 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1409422' 00:10:08.815 killing process with pid 1409422 00:10:08.815 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1409422 00:10:08.815 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1409422 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:09.073 00:10:09.073 real 0m15.651s 00:10:09.073 user 1m0.472s 00:10:09.073 sys 0m2.090s 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.073 ************************************ 00:10:09.073 END TEST nvmf_filesystem_in_capsule 00:10:09.073 ************************************ 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.073 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.073 rmmod nvme_tcp 00:10:09.073 rmmod nvme_fabrics 00:10:09.073 rmmod nvme_keyring 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.333 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.243 00:10:11.243 real 0m39.489s 00:10:11.243 user 2m14.949s 00:10:11.243 sys 0m6.205s 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 ************************************ 00:10:11.243 END TEST nvmf_filesystem 00:10:11.243 ************************************ 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 ************************************ 00:10:11.243 START TEST nvmf_target_discovery 00:10:11.243 ************************************ 00:10:11.243 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:11.503 * Looking for test storage... 
00:10:11.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:11.503 
17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.503 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.504 --rc genhtml_branch_coverage=1 00:10:11.504 --rc genhtml_function_coverage=1 00:10:11.504 --rc genhtml_legend=1 00:10:11.504 --rc geninfo_all_blocks=1 00:10:11.504 --rc geninfo_unexecuted_blocks=1 00:10:11.504 00:10:11.504 ' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.504 --rc genhtml_branch_coverage=1 00:10:11.504 --rc genhtml_function_coverage=1 00:10:11.504 --rc genhtml_legend=1 00:10:11.504 --rc geninfo_all_blocks=1 00:10:11.504 --rc geninfo_unexecuted_blocks=1 00:10:11.504 00:10:11.504 ' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.504 --rc genhtml_branch_coverage=1 00:10:11.504 --rc genhtml_function_coverage=1 00:10:11.504 --rc genhtml_legend=1 00:10:11.504 --rc geninfo_all_blocks=1 00:10:11.504 --rc geninfo_unexecuted_blocks=1 00:10:11.504 00:10:11.504 ' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.504 --rc genhtml_branch_coverage=1 00:10:11.504 --rc genhtml_function_coverage=1 00:10:11.504 --rc genhtml_legend=1 00:10:11.504 --rc geninfo_all_blocks=1 00:10:11.504 --rc geninfo_unexecuted_blocks=1 00:10:11.504 00:10:11.504 ' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.504 17:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.504 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.035 17:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.035 17:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
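The trace above sorts NICs into the e810/x722/mlx buckets purely by PCI vendor:device ID. A standalone sketch of that classification — `classify_nic` is a hypothetical helper, not part of nvmf/common.sh, and the IDs are the ones visible in the trace:

```shell
# Classify a NIC by PCI vendor/device ID, mirroring the ID tables the
# trace builds into the e810/x722/mlx arrays.
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810    ;;  # Intel E810 family
    0x8086:0x37d2)               echo x722    ;;  # Intel X722
    0x15b3:*)                    echo mlx     ;;  # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # prints "e810" -- the ID both ports report below
```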
00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:14.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:14.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.035 17:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:14.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.035 17:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:14.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:14.035 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:10:14.036 00:10:14.036 --- 10.0.0.2 ping statistics --- 00:10:14.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.036 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:10:14.036 00:10:14.036 --- 10.0.0.1 ping statistics --- 00:10:14.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.036 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1413434 00:10:14.036 17:59:36 
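nvmf_tcp_init above moves one port (cvl_0_0) into a private network namespace as the target side, leaves the other (cvl_0_1) on the host as the initiator, opens TCP/4420, and ping-checks both directions. A dry-run sketch of that plumbing — the interface names and addresses are the ones from this run, the helper name is mine, and executing the printed commands for real needs root:

```shell
# Emit the namespace-plumbing commands nvmf_tcp_init performs, without
# running them. Pass the target-side and initiator-side interface names.
print_nvmf_netns_setup() {
  local tgt_if=$1 ini_if=$2 ns="${1}_ns_spdk"
  cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}

print_nvmf_netns_setup cvl_0_0 cvl_0_1
```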
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1413434 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1413434 ']' 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.036 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.036 [2024-12-09 17:59:36.784165] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:10:14.036 [2024-12-09 17:59:36.784237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.036 [2024-12-09 17:59:36.853231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.036 [2024-12-09 17:59:36.907287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
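nvmfappstart launches nvmf_tgt inside the namespace and then waitforlisten blocks until the app's RPC socket at /var/tmp/spdk.sock is ready. A minimal standalone stand-in for that wait — simplified relative to the real helper, which also tracks the process:

```shell
# Poll for the SPDK RPC Unix socket; return non-zero if it never appears.
wait_for_rpc_sock() {
  local sock=${1:-/var/tmp/spdk.sock} tries=${2:-100}
  while [ "$tries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    tries=$((tries - 1))
    sleep 0.1
  done
  return 1
}
```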
00:10:14.036 [2024-12-09 17:59:36.907342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.036 [2024-12-09 17:59:36.907362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.036 [2024-12-09 17:59:36.907373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.036 [2024-12-09 17:59:36.907382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.036 [2024-12-09 17:59:36.909008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.036 [2024-12-09 17:59:36.909074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.036 [2024-12-09 17:59:36.909182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.036 [2024-12-09 17:59:36.909190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.036 [2024-12-09 17:59:37.057475] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.036 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 Null1 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 
17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 [2024-12-09 17:59:37.108824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 Null2 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 
17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 Null3 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 Null4 00:10:14.295 
17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
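The `seq 1 4` loop above provisions four identical subsystems — a null bdev, a subsystem, a namespace mapping, and a TCP listener each — plus a listener on the discovery subsystem. Condensed into one sketch; the test drives these RPCs through its rpc_cmd wrapper, and the echo stub here just prints them:

```shell
# Print the RPC sequence the discovery test issues; replace the echo stub
# with e.g. "scripts/rpc.py -s /var/tmp/spdk.sock" to run it for real.
emit_discovery_rpcs() {
  local rpc="echo" i
  for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}

emit_discovery_rpcs
```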
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.295 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:14.554 00:10:14.554 Discovery Log Number of Records 6, Generation counter 6 00:10:14.554 =====Discovery Log Entry 0====== 00:10:14.554 trtype: tcp 00:10:14.554 adrfam: ipv4 00:10:14.554 subtype: current discovery subsystem 00:10:14.554 treq: not required 00:10:14.554 portid: 0 00:10:14.554 trsvcid: 4420 00:10:14.554 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:14.554 traddr: 10.0.0.2 00:10:14.554 eflags: explicit discovery connections, duplicate discovery information 00:10:14.554 sectype: none 00:10:14.554 =====Discovery Log Entry 1====== 00:10:14.554 trtype: tcp 00:10:14.554 adrfam: ipv4 00:10:14.554 subtype: nvme subsystem 00:10:14.554 treq: not required 00:10:14.554 portid: 0 00:10:14.554 trsvcid: 4420 00:10:14.554 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:14.554 traddr: 10.0.0.2 00:10:14.554 eflags: none 00:10:14.554 sectype: none 00:10:14.554 =====Discovery Log Entry 2====== 00:10:14.554 
trtype: tcp 00:10:14.554 adrfam: ipv4 00:10:14.554 subtype: nvme subsystem 00:10:14.554 treq: not required 00:10:14.554 portid: 0 00:10:14.554 trsvcid: 4420 00:10:14.554 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:14.554 traddr: 10.0.0.2 00:10:14.554 eflags: none 00:10:14.554 sectype: none 00:10:14.554 =====Discovery Log Entry 3====== 00:10:14.554 trtype: tcp 00:10:14.554 adrfam: ipv4 00:10:14.554 subtype: nvme subsystem 00:10:14.554 treq: not required 00:10:14.554 portid: 0 00:10:14.554 trsvcid: 4420 00:10:14.554 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:14.554 traddr: 10.0.0.2 00:10:14.554 eflags: none 00:10:14.554 sectype: none 00:10:14.554 =====Discovery Log Entry 4====== 00:10:14.554 trtype: tcp 00:10:14.554 adrfam: ipv4 00:10:14.554 subtype: nvme subsystem 00:10:14.554 treq: not required 00:10:14.554 portid: 0 00:10:14.554 trsvcid: 4420 00:10:14.554 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:14.554 traddr: 10.0.0.2 00:10:14.554 eflags: none 00:10:14.554 sectype: none 00:10:14.554 =====Discovery Log Entry 5====== 00:10:14.554 trtype: tcp 00:10:14.554 adrfam: ipv4 00:10:14.554 subtype: discovery subsystem referral 00:10:14.554 treq: not required 00:10:14.554 portid: 0 00:10:14.554 trsvcid: 4430 00:10:14.554 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:14.554 traddr: 10.0.0.2 00:10:14.554 eflags: none 00:10:14.554 sectype: none 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:14.554 Perform nvmf subsystem discovery via RPC 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.554 [ 00:10:14.554 { 00:10:14.554 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:14.554 "subtype": "Discovery", 00:10:14.554 "listen_addresses": [ 00:10:14.554 { 00:10:14.554 "trtype": "TCP", 00:10:14.554 "adrfam": "IPv4", 00:10:14.554 "traddr": "10.0.0.2", 00:10:14.554 "trsvcid": "4420" 00:10:14.554 } 00:10:14.554 ], 00:10:14.554 "allow_any_host": true, 00:10:14.554 "hosts": [] 00:10:14.554 }, 00:10:14.554 { 00:10:14.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.554 "subtype": "NVMe", 00:10:14.554 "listen_addresses": [ 00:10:14.554 { 00:10:14.554 "trtype": "TCP", 00:10:14.554 "adrfam": "IPv4", 00:10:14.554 "traddr": "10.0.0.2", 00:10:14.554 "trsvcid": "4420" 00:10:14.554 } 00:10:14.554 ], 00:10:14.554 "allow_any_host": true, 00:10:14.554 "hosts": [], 00:10:14.554 "serial_number": "SPDK00000000000001", 00:10:14.554 "model_number": "SPDK bdev Controller", 00:10:14.554 "max_namespaces": 32, 00:10:14.554 "min_cntlid": 1, 00:10:14.554 "max_cntlid": 65519, 00:10:14.554 "namespaces": [ 00:10:14.554 { 00:10:14.554 "nsid": 1, 00:10:14.554 "bdev_name": "Null1", 00:10:14.554 "name": "Null1", 00:10:14.554 "nguid": "608BBF28870A44B6BBB6A6FD47FA962C", 00:10:14.554 "uuid": "608bbf28-870a-44b6-bbb6-a6fd47fa962c" 00:10:14.554 } 00:10:14.554 ] 00:10:14.554 }, 00:10:14.554 { 00:10:14.554 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:14.554 "subtype": "NVMe", 00:10:14.554 "listen_addresses": [ 00:10:14.554 { 00:10:14.554 "trtype": "TCP", 00:10:14.554 "adrfam": "IPv4", 00:10:14.554 "traddr": "10.0.0.2", 00:10:14.554 "trsvcid": "4420" 00:10:14.554 } 00:10:14.554 ], 00:10:14.554 "allow_any_host": true, 00:10:14.554 "hosts": [], 00:10:14.554 "serial_number": "SPDK00000000000002", 00:10:14.554 "model_number": "SPDK bdev Controller", 00:10:14.554 "max_namespaces": 32, 00:10:14.554 "min_cntlid": 1, 00:10:14.554 "max_cntlid": 65519, 00:10:14.554 "namespaces": [ 00:10:14.554 { 00:10:14.554 "nsid": 1, 00:10:14.554 "bdev_name": "Null2", 00:10:14.554 "name": "Null2", 00:10:14.554 "nguid": "574AAE6E8F144529B1B4817043A0515E", 
00:10:14.554 "uuid": "574aae6e-8f14-4529-b1b4-817043a0515e" 00:10:14.554 } 00:10:14.554 ] 00:10:14.554 }, 00:10:14.554 { 00:10:14.554 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:14.554 "subtype": "NVMe", 00:10:14.554 "listen_addresses": [ 00:10:14.554 { 00:10:14.554 "trtype": "TCP", 00:10:14.554 "adrfam": "IPv4", 00:10:14.554 "traddr": "10.0.0.2", 00:10:14.554 "trsvcid": "4420" 00:10:14.554 } 00:10:14.554 ], 00:10:14.554 "allow_any_host": true, 00:10:14.554 "hosts": [], 00:10:14.554 "serial_number": "SPDK00000000000003", 00:10:14.554 "model_number": "SPDK bdev Controller", 00:10:14.554 "max_namespaces": 32, 00:10:14.554 "min_cntlid": 1, 00:10:14.554 "max_cntlid": 65519, 00:10:14.554 "namespaces": [ 00:10:14.554 { 00:10:14.554 "nsid": 1, 00:10:14.554 "bdev_name": "Null3", 00:10:14.554 "name": "Null3", 00:10:14.554 "nguid": "65A95E3FCB81422A94A37B88AF3EE5A5", 00:10:14.554 "uuid": "65a95e3f-cb81-422a-94a3-7b88af3ee5a5" 00:10:14.554 } 00:10:14.554 ] 00:10:14.554 }, 00:10:14.554 { 00:10:14.554 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:14.554 "subtype": "NVMe", 00:10:14.554 "listen_addresses": [ 00:10:14.554 { 00:10:14.554 "trtype": "TCP", 00:10:14.554 "adrfam": "IPv4", 00:10:14.554 "traddr": "10.0.0.2", 00:10:14.554 "trsvcid": "4420" 00:10:14.554 } 00:10:14.554 ], 00:10:14.554 "allow_any_host": true, 00:10:14.554 "hosts": [], 00:10:14.554 "serial_number": "SPDK00000000000004", 00:10:14.554 "model_number": "SPDK bdev Controller", 00:10:14.554 "max_namespaces": 32, 00:10:14.554 "min_cntlid": 1, 00:10:14.554 "max_cntlid": 65519, 00:10:14.554 "namespaces": [ 00:10:14.554 { 00:10:14.554 "nsid": 1, 00:10:14.554 "bdev_name": "Null4", 00:10:14.554 "name": "Null4", 00:10:14.554 "nguid": "E1BCAC75094E475CB4984C00014A04AB", 00:10:14.554 "uuid": "e1bcac75-094e-475c-b498-4c00014a04ab" 00:10:14.554 } 00:10:14.554 ] 00:10:14.554 } 00:10:14.554 ] 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.554 
17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.554 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.555 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.555 rmmod nvme_tcp 00:10:14.555 rmmod nvme_fabrics 00:10:14.555 rmmod nvme_keyring 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1413434 ']' 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1413434 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1413434 ']' 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1413434 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1413434 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1413434' 00:10:14.813 killing process with pid 1413434 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1413434 00:10:14.813 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1413434 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.071 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.979 00:10:16.979 real 0m5.687s 00:10:16.979 user 0m4.812s 00:10:16.979 sys 0m1.941s 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.979 ************************************ 00:10:16.979 END TEST nvmf_target_discovery 00:10:16.979 ************************************ 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:16.979 ************************************ 00:10:16.979 START TEST nvmf_referrals 00:10:16.979 ************************************ 00:10:16.979 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:17.238 * Looking for test storage... 
00:10:17.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:17.238 17:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.238 
--rc genhtml_branch_coverage=1 00:10:17.238 --rc genhtml_function_coverage=1 00:10:17.238 --rc genhtml_legend=1 00:10:17.238 --rc geninfo_all_blocks=1 00:10:17.238 --rc geninfo_unexecuted_blocks=1 00:10:17.238 00:10:17.238 ' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.238 --rc genhtml_branch_coverage=1 00:10:17.238 --rc genhtml_function_coverage=1 00:10:17.238 --rc genhtml_legend=1 00:10:17.238 --rc geninfo_all_blocks=1 00:10:17.238 --rc geninfo_unexecuted_blocks=1 00:10:17.238 00:10:17.238 ' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.238 --rc genhtml_branch_coverage=1 00:10:17.238 --rc genhtml_function_coverage=1 00:10:17.238 --rc genhtml_legend=1 00:10:17.238 --rc geninfo_all_blocks=1 00:10:17.238 --rc geninfo_unexecuted_blocks=1 00:10:17.238 00:10:17.238 ' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.238 --rc genhtml_branch_coverage=1 00:10:17.238 --rc genhtml_function_coverage=1 00:10:17.238 --rc genhtml_legend=1 00:10:17.238 --rc geninfo_all_blocks=1 00:10:17.238 --rc geninfo_unexecuted_blocks=1 00:10:17.238 00:10:17.238 ' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.238 
17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:17.238 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.239 17:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.239 17:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.239 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.770 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:19.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:19.771 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:19.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:19.771 17:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:19.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:19.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:10:19.771 00:10:19.771 --- 10.0.0.2 ping statistics --- 00:10:19.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.771 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:10:19.771 00:10:19.771 --- 10.0.0.1 ping statistics --- 00:10:19.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.771 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1415536 00:10:19.771 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1415536 00:10:19.772 
17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1415536 ']' 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.772 [2024-12-09 17:59:42.495939] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:10:19.772 [2024-12-09 17:59:42.496031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.772 [2024-12-09 17:59:42.571962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.772 [2024-12-09 17:59:42.631806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.772 [2024-12-09 17:59:42.631884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
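The `waitforlisten 1415536` call above blocks until the freshly started `nvmf_tgt` process is up and listening on its RPC UNIX socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A minimal sketch of that polling pattern is below; `wait_for_socket` is a hypothetical name (the real helper lives in `autotest_common.sh`), and it tests with `-e` rather than `-S` so the sketch works on any path, not only a live UNIX socket:

```shell
# Hypothetical simplification of the waitforlisten pattern: poll until a
# path exists, up to max_retries attempts, then give up with failure.
# The real helper additionally checks that the target PID is still alive
# and uses -S to require a UNIX socket at the RPC address.
wait_for_socket() {
    local sock="$1" max_retries="${2:-100}" i=0
    while (( i < max_retries )); do
        [[ -e "$sock" ]] && return 0   # real helper: [[ -S "$sock" ]]
        sleep 0.1
        (( i++ ))
    done
    return 1
}

# Usage: wait_for_socket /var/tmp/spdk.sock 100 || echo "target never came up"
```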
00:10:19.772 [2024-12-09 17:59:42.631897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.772 [2024-12-09 17:59:42.631908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.772 [2024-12-09 17:59:42.631917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.772 [2024-12-09 17:59:42.633576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.772 [2024-12-09 17:59:42.633639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.772 [2024-12-09 17:59:42.633662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.772 [2024-12-09 17:59:42.633671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.772 [2024-12-09 17:59:42.792731] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.772 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.030 [2024-12-09 17:59:42.815765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:20.030 17:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.030 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.287 17:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:20.287 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.288 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:20.288 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.288 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.288 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.545 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.802 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:21.061 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:21.061 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:21.061 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:21.061 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:21.061 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:21.061 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:21.061 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:21.319 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:21.576 17:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:21.576 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.834 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.834 rmmod nvme_tcp 00:10:21.834 rmmod nvme_fabrics 00:10:21.834 rmmod nvme_keyring 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1415536 ']' 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1415536 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1415536 ']' 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1415536 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1415536 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1415536' 00:10:22.092 killing process with pid 1415536 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1415536 00:10:22.092 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1415536 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.352 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.257 00:10:24.257 real 0m7.227s 00:10:24.257 user 0m11.652s 00:10:24.257 sys 0m2.342s 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.257 
************************************ 00:10:24.257 END TEST nvmf_referrals 00:10:24.257 ************************************ 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.257 ************************************ 00:10:24.257 START TEST nvmf_connect_disconnect 00:10:24.257 ************************************ 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:24.257 * Looking for test storage... 
00:10:24.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.257 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.519 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.520 --rc genhtml_branch_coverage=1 00:10:24.520 --rc genhtml_function_coverage=1 00:10:24.520 --rc genhtml_legend=1 00:10:24.520 --rc geninfo_all_blocks=1 00:10:24.520 --rc geninfo_unexecuted_blocks=1 00:10:24.520 00:10:24.520 ' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.520 --rc genhtml_branch_coverage=1 00:10:24.520 --rc genhtml_function_coverage=1 00:10:24.520 --rc genhtml_legend=1 00:10:24.520 --rc geninfo_all_blocks=1 00:10:24.520 --rc geninfo_unexecuted_blocks=1 00:10:24.520 00:10:24.520 ' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.520 --rc genhtml_branch_coverage=1 00:10:24.520 --rc genhtml_function_coverage=1 00:10:24.520 --rc genhtml_legend=1 00:10:24.520 --rc geninfo_all_blocks=1 00:10:24.520 --rc geninfo_unexecuted_blocks=1 00:10:24.520 00:10:24.520 ' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.520 --rc genhtml_branch_coverage=1 00:10:24.520 --rc genhtml_function_coverage=1 00:10:24.520 --rc genhtml_legend=1 00:10:24.520 --rc geninfo_all_blocks=1 00:10:24.520 --rc geninfo_unexecuted_blocks=1 00:10:24.520 00:10:24.520 ' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.520 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.083 17:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.083 17:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:27.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.083 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:27.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.084 17:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:27.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.084 17:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:27.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.084 17:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:10:27.084 00:10:27.084 --- 10.0.0.2 ping statistics --- 00:10:27.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.084 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:10:27.084 00:10:27.084 --- 10.0.0.1 ping statistics --- 00:10:27.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.084 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1417841 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1417841 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1417841 ']' 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.084 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 [2024-12-09 17:59:49.755650] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:10:27.084 [2024-12-09 17:59:49.755750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.084 [2024-12-09 17:59:49.828692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.084 [2024-12-09 17:59:49.884428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:27.084 [2024-12-09 17:59:49.884487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.084 [2024-12-09 17:59:49.884510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.084 [2024-12-09 17:59:49.884520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.084 [2024-12-09 17:59:49.884549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.084 [2024-12-09 17:59:49.886118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.084 [2024-12-09 17:59:49.886185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.084 [2024-12-09 17:59:49.886299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.084 [2024-12-09 17:59:49.886296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:27.085 17:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.085 [2024-12-09 17:59:50.026057] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.085 17:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:27.085 [2024-12-09 17:59:50.094831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:27.085 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:30.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:41.217 18:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.217 rmmod nvme_tcp 00:10:41.217 rmmod nvme_fabrics 00:10:41.217 rmmod nvme_keyring 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1417841 ']' 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1417841 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1417841 ']' 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1417841 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1417841 
00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.217 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.218 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1417841' 00:10:41.218 killing process with pid 1417841 00:10:41.218 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1417841 00:10:41.218 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1417841 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.218 18:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.218 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.755 00:10:43.755 real 0m19.026s 00:10:43.755 user 0m56.954s 00:10:43.755 sys 0m3.375s 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.755 ************************************ 00:10:43.755 END TEST nvmf_connect_disconnect 00:10:43.755 ************************************ 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.755 ************************************ 00:10:43.755 START TEST nvmf_multitarget 00:10:43.755 ************************************ 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:43.755 * Looking for test storage... 
00:10:43.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.755 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.755 --rc genhtml_branch_coverage=1 00:10:43.755 --rc genhtml_function_coverage=1 00:10:43.755 --rc genhtml_legend=1 00:10:43.755 --rc geninfo_all_blocks=1 00:10:43.755 --rc geninfo_unexecuted_blocks=1 00:10:43.755 00:10:43.755 ' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.755 --rc genhtml_branch_coverage=1 00:10:43.755 --rc genhtml_function_coverage=1 00:10:43.755 --rc genhtml_legend=1 00:10:43.755 --rc geninfo_all_blocks=1 00:10:43.755 --rc geninfo_unexecuted_blocks=1 00:10:43.755 00:10:43.755 ' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.755 --rc genhtml_branch_coverage=1 00:10:43.755 --rc genhtml_function_coverage=1 00:10:43.755 --rc genhtml_legend=1 00:10:43.755 --rc geninfo_all_blocks=1 00:10:43.755 --rc geninfo_unexecuted_blocks=1 00:10:43.755 00:10:43.755 ' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.755 --rc genhtml_branch_coverage=1 00:10:43.755 --rc genhtml_function_coverage=1 00:10:43.755 --rc genhtml_legend=1 00:10:43.755 --rc geninfo_all_blocks=1 00:10:43.755 --rc geninfo_unexecuted_blocks=1 00:10:43.755 00:10:43.755 ' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.755 18:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.755 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.756 18:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.756 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:45.657 18:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.657 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.658 18:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:45.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:45.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.658 18:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:45.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.658 
18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:45.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.658 18:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:10:45.658 00:10:45.658 --- 10.0.0.2 ping statistics --- 00:10:45.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.658 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:10:45.658 00:10:45.658 --- 10.0.0.1 ping statistics --- 00:10:45.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.658 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1422141 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1422141 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1422141 ']' 00:10:45.658 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.659 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.659 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.659 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.659 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.659 [2024-12-09 18:00:08.690276] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:10:45.659 [2024-12-09 18:00:08.690359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.917 [2024-12-09 18:00:08.764803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.917 [2024-12-09 18:00:08.824378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.917 [2024-12-09 18:00:08.824440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:45.917 [2024-12-09 18:00:08.824452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.917 [2024-12-09 18:00:08.824463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.917 [2024-12-09 18:00:08.824472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.917 [2024-12-09 18:00:08.826103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.917 [2024-12-09 18:00:08.826167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.917 [2024-12-09 18:00:08.826234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.917 [2024-12-09 18:00:08.826237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.917 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.917 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:45.917 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.917 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.917 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:46.175 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.175 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:46.175 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:46.175 18:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:46.175 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:46.175 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:46.175 "nvmf_tgt_1" 00:10:46.175 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:46.432 "nvmf_tgt_2" 00:10:46.432 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:46.432 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:46.432 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:46.433 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:46.690 true 00:10:46.691 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:46.691 true 00:10:46.691 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:46.691 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.949 rmmod nvme_tcp 00:10:46.949 rmmod nvme_fabrics 00:10:46.949 rmmod nvme_keyring 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1422141 ']' 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1422141 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1422141 ']' 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1422141 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1422141 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1422141' 00:10:46.949 killing process with pid 1422141 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1422141 00:10:46.949 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1422141 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.207 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.748 00:10:49.748 real 0m5.844s 00:10:49.748 user 0m6.803s 00:10:49.748 sys 0m1.943s 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.748 ************************************ 00:10:49.748 END TEST nvmf_multitarget 00:10:49.748 ************************************ 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:49.748 ************************************ 00:10:49.748 START TEST nvmf_rpc 00:10:49.748 ************************************ 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:49.748 * Looking for test storage... 
00:10:49.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.748 18:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.748 --rc genhtml_branch_coverage=1 00:10:49.748 --rc genhtml_function_coverage=1 00:10:49.748 --rc genhtml_legend=1 00:10:49.748 --rc geninfo_all_blocks=1 00:10:49.748 --rc geninfo_unexecuted_blocks=1 
00:10:49.748 00:10:49.748 ' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.748 --rc genhtml_branch_coverage=1 00:10:49.748 --rc genhtml_function_coverage=1 00:10:49.748 --rc genhtml_legend=1 00:10:49.748 --rc geninfo_all_blocks=1 00:10:49.748 --rc geninfo_unexecuted_blocks=1 00:10:49.748 00:10:49.748 ' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.748 --rc genhtml_branch_coverage=1 00:10:49.748 --rc genhtml_function_coverage=1 00:10:49.748 --rc genhtml_legend=1 00:10:49.748 --rc geninfo_all_blocks=1 00:10:49.748 --rc geninfo_unexecuted_blocks=1 00:10:49.748 00:10:49.748 ' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.748 --rc genhtml_branch_coverage=1 00:10:49.748 --rc genhtml_function_coverage=1 00:10:49.748 --rc genhtml_legend=1 00:10:49.748 --rc geninfo_all_blocks=1 00:10:49.748 --rc geninfo_unexecuted_blocks=1 00:10:49.748 00:10:49.748 ' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.748 18:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.748 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.749 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.749 18:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.655 
18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.655 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:10:51.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:51.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:51.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:51.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.656 18:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.656 
18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.656 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:10:51.915 00:10:51.915 --- 10.0.0.2 ping statistics --- 00:10:51.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.915 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:10:51.915 00:10:51.915 --- 10.0.0.1 ping statistics --- 00:10:51.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.915 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1424337 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.915 
18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1424337 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1424337 ']' 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.915 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.915 [2024-12-09 18:00:14.817380] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:10:51.915 [2024-12-09 18:00:14.817469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.915 [2024-12-09 18:00:14.891378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.915 [2024-12-09 18:00:14.952013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.915 [2024-12-09 18:00:14.952078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.915 [2024-12-09 18:00:14.952095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.915 [2024-12-09 18:00:14.952106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:51.915 [2024-12-09 18:00:14.952115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.915 [2024-12-09 18:00:14.953745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.915 [2024-12-09 18:00:14.953829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.915 [2024-12-09 18:00:14.953806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.915 [2024-12-09 18:00:14.953842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:52.175 "tick_rate": 2700000000, 00:10:52.175 "poll_groups": [ 00:10:52.175 { 00:10:52.175 "name": "nvmf_tgt_poll_group_000", 00:10:52.175 "admin_qpairs": 0, 00:10:52.175 "io_qpairs": 0, 00:10:52.175 
"current_admin_qpairs": 0, 00:10:52.175 "current_io_qpairs": 0, 00:10:52.175 "pending_bdev_io": 0, 00:10:52.175 "completed_nvme_io": 0, 00:10:52.175 "transports": [] 00:10:52.175 }, 00:10:52.175 { 00:10:52.175 "name": "nvmf_tgt_poll_group_001", 00:10:52.175 "admin_qpairs": 0, 00:10:52.175 "io_qpairs": 0, 00:10:52.175 "current_admin_qpairs": 0, 00:10:52.175 "current_io_qpairs": 0, 00:10:52.175 "pending_bdev_io": 0, 00:10:52.175 "completed_nvme_io": 0, 00:10:52.175 "transports": [] 00:10:52.175 }, 00:10:52.175 { 00:10:52.175 "name": "nvmf_tgt_poll_group_002", 00:10:52.175 "admin_qpairs": 0, 00:10:52.175 "io_qpairs": 0, 00:10:52.175 "current_admin_qpairs": 0, 00:10:52.175 "current_io_qpairs": 0, 00:10:52.175 "pending_bdev_io": 0, 00:10:52.175 "completed_nvme_io": 0, 00:10:52.175 "transports": [] 00:10:52.175 }, 00:10:52.175 { 00:10:52.175 "name": "nvmf_tgt_poll_group_003", 00:10:52.175 "admin_qpairs": 0, 00:10:52.175 "io_qpairs": 0, 00:10:52.175 "current_admin_qpairs": 0, 00:10:52.175 "current_io_qpairs": 0, 00:10:52.175 "pending_bdev_io": 0, 00:10:52.175 "completed_nvme_io": 0, 00:10:52.175 "transports": [] 00:10:52.175 } 00:10:52.175 ] 00:10:52.175 }' 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.175 [2024-12-09 18:00:15.195181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.175 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:52.435 "tick_rate": 2700000000, 00:10:52.435 "poll_groups": [ 00:10:52.435 { 00:10:52.435 "name": "nvmf_tgt_poll_group_000", 00:10:52.435 "admin_qpairs": 0, 00:10:52.435 "io_qpairs": 0, 00:10:52.435 "current_admin_qpairs": 0, 00:10:52.435 "current_io_qpairs": 0, 00:10:52.435 "pending_bdev_io": 0, 00:10:52.435 "completed_nvme_io": 0, 00:10:52.435 "transports": [ 00:10:52.435 { 00:10:52.435 "trtype": "TCP" 00:10:52.435 } 00:10:52.435 ] 00:10:52.435 }, 00:10:52.435 { 00:10:52.435 "name": "nvmf_tgt_poll_group_001", 00:10:52.435 "admin_qpairs": 0, 00:10:52.435 "io_qpairs": 0, 00:10:52.435 "current_admin_qpairs": 0, 00:10:52.435 "current_io_qpairs": 0, 00:10:52.435 "pending_bdev_io": 0, 00:10:52.435 "completed_nvme_io": 0, 00:10:52.435 "transports": [ 00:10:52.435 { 00:10:52.435 "trtype": "TCP" 00:10:52.435 } 00:10:52.435 ] 00:10:52.435 }, 00:10:52.435 { 00:10:52.435 "name": "nvmf_tgt_poll_group_002", 00:10:52.435 "admin_qpairs": 0, 00:10:52.435 "io_qpairs": 0, 00:10:52.435 
"current_admin_qpairs": 0, 00:10:52.435 "current_io_qpairs": 0, 00:10:52.435 "pending_bdev_io": 0, 00:10:52.435 "completed_nvme_io": 0, 00:10:52.435 "transports": [ 00:10:52.435 { 00:10:52.435 "trtype": "TCP" 00:10:52.435 } 00:10:52.435 ] 00:10:52.435 }, 00:10:52.435 { 00:10:52.435 "name": "nvmf_tgt_poll_group_003", 00:10:52.435 "admin_qpairs": 0, 00:10:52.435 "io_qpairs": 0, 00:10:52.435 "current_admin_qpairs": 0, 00:10:52.435 "current_io_qpairs": 0, 00:10:52.435 "pending_bdev_io": 0, 00:10:52.435 "completed_nvme_io": 0, 00:10:52.435 "transports": [ 00:10:52.435 { 00:10:52.435 "trtype": "TCP" 00:10:52.435 } 00:10:52.435 ] 00:10:52.435 } 00:10:52.435 ] 00:10:52.435 }' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 Malloc1 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 [2024-12-09 18:00:15.347428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.435 
18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:52.435 [2024-12-09 18:00:15.370003] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:52.435 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:52.435 could not add new controller: failed to write to nvme-fabrics device 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.435 18:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.435 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.000 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.000 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.000 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.000 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:53.000 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.528 18:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.528 [2024-12-09 18:00:18.150777] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:55.528 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:55.528 could not add new controller: failed to write to nvme-fabrics device 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.528 18:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.528 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.094 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.094 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.094 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.094 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.094 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:57.991 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.249 [2024-12-09 18:00:21.126744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.249 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.815 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.815 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:58.815 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.815 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:58.815 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.342 18:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.342 [2024-12-09 18:00:23.992063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.342 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.342 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.342 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:01.342 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.342 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.342 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.342 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.908 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.908 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:01.908 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.908 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:01.908 18:00:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.807 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 [2024-12-09 18:00:26.863635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.065 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.630 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.630 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.630 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:04.630 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.630 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.560 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.560 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.560 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.848 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.848 [2024-12-09 18:00:29.719087] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.849 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.415 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.415 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:07.415 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.415 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:07.415 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:09.312 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 [2024-12-09 18:00:32.487304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.570 18:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.570 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.136 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.136 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.136 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.136 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.136 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.663 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 [2024-12-09 18:00:35.266349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 [2024-12-09 18:00:35.314408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.664 
18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 [2024-12-09 18:00:35.362592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:12.664 
18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 [2024-12-09 18:00:35.410767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.664 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 [2024-12-09 
18:00:35.458929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 
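The trace above repeats one subsystem lifecycle per loop iteration (target/rpc.sh@99-107): create the subsystem, add a TCP listener, attach the Malloc1 namespace, allow any host, then remove the namespace and delete the subsystem. A minimal sketch of that cycle follows; `rpc_cmd` here is a hypothetical echo stub standing in for SPDK's real RPC wrapper, so only the loop structure is shown, not actual target configuration:

```shell
# Stub: the real rpc_cmd in SPDK's test harness forwards to scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

# One iteration of the create/configure/teardown cycle seen in the trace.
lifecycle() {
  local nqn=nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
  rpc_cmd nvmf_subsystem_remove_ns "$nqn" 1
  rpc_cmd nvmf_delete_subsystem "$nqn"
}

# The trace runs this for i in $(seq 1 $loops) with loops=5.
for i in $(seq 1 5); do
  lifecycle
done
```

With the stub in place each iteration emits six RPC lines, which matches the six rpc_cmd invocations visible per iteration in the xtrace output.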
18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:12.665 "tick_rate": 2700000000, 00:11:12.665 "poll_groups": [ 00:11:12.665 { 00:11:12.665 "name": "nvmf_tgt_poll_group_000", 00:11:12.665 "admin_qpairs": 2, 00:11:12.665 "io_qpairs": 84, 00:11:12.665 "current_admin_qpairs": 0, 00:11:12.665 "current_io_qpairs": 0, 00:11:12.665 "pending_bdev_io": 0, 00:11:12.665 "completed_nvme_io": 184, 00:11:12.665 "transports": [ 00:11:12.665 { 00:11:12.665 "trtype": "TCP" 00:11:12.665 } 00:11:12.665 ] 00:11:12.665 }, 00:11:12.665 { 00:11:12.665 "name": "nvmf_tgt_poll_group_001", 00:11:12.665 "admin_qpairs": 2, 00:11:12.665 "io_qpairs": 84, 00:11:12.665 "current_admin_qpairs": 0, 00:11:12.665 "current_io_qpairs": 0, 00:11:12.665 "pending_bdev_io": 0, 00:11:12.665 "completed_nvme_io": 180, 00:11:12.665 "transports": [ 00:11:12.665 { 00:11:12.665 "trtype": "TCP" 00:11:12.665 } 00:11:12.665 ] 00:11:12.665 }, 00:11:12.665 { 00:11:12.665 "name": "nvmf_tgt_poll_group_002", 00:11:12.665 "admin_qpairs": 1, 00:11:12.665 "io_qpairs": 84, 00:11:12.665 "current_admin_qpairs": 0, 00:11:12.665 "current_io_qpairs": 0, 00:11:12.665 "pending_bdev_io": 0, 00:11:12.665 "completed_nvme_io": 189, 00:11:12.665 "transports": [ 00:11:12.665 { 00:11:12.665 "trtype": "TCP" 00:11:12.665 } 00:11:12.665 ] 00:11:12.665 }, 00:11:12.665 { 00:11:12.665 "name": "nvmf_tgt_poll_group_003", 00:11:12.665 "admin_qpairs": 2, 00:11:12.665 "io_qpairs": 84, 
00:11:12.665 "current_admin_qpairs": 0, 00:11:12.665 "current_io_qpairs": 0, 00:11:12.665 "pending_bdev_io": 0, 00:11:12.665 "completed_nvme_io": 133, 00:11:12.665 "transports": [ 00:11:12.665 { 00:11:12.665 "trtype": "TCP" 00:11:12.665 } 00:11:12.665 ] 00:11:12.665 } 00:11:12.665 ] 00:11:12.665 }' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
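The `jsum` helper traced above (target/rpc.sh@19-20) pipes the `nvmf_get_stats` JSON through a jq filter and totals the matches with awk. The stand-in below reproduces the summing step with grep instead of jq so it runs without jq installed; the `stats` string is trimmed to the qpair fields actually summed in the trace, and the field values are taken from the stats block printed above:

```shell
# Stats trimmed from the nvmf_get_stats output above (4 poll groups).
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":2,"io_qpairs":84},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":2,"io_qpairs":84},
  {"name":"nvmf_tgt_poll_group_002","admin_qpairs":1,"io_qpairs":84},
  {"name":"nvmf_tgt_poll_group_003","admin_qpairs":2,"io_qpairs":84}]}'

# The real jsum filters with jq '.poll_groups[].admin_qpairs'; this
# approximation greps the field by name and sums with the same awk one-liner.
jsum() { grep -o "\"$1\":[0-9]*" <<<"$stats" | awk -F: '{s+=$2} END {print s}'; }

jsum admin_qpairs   # 7   -> matches the (( 7 > 0 )) check in the trace
jsum io_qpairs      # 336 -> matches the (( 336 > 0 )) check
```

The checks in the trace only assert the sums are positive, i.e. that every poll group actually carried admin and I/O queue pairs during the run.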
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.665 rmmod nvme_tcp 00:11:12.665 rmmod nvme_fabrics 00:11:12.665 rmmod nvme_keyring 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1424337 ']' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1424337 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1424337 ']' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1424337 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1424337 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1424337' 00:11:12.665 killing process with pid 1424337 00:11:12.665 18:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1424337 00:11:12.665 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1424337 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.925 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.464 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.464 00:11:15.464 real 0m25.746s 00:11:15.464 user 1m23.434s 00:11:15.464 sys 0m4.177s 00:11:15.464 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.464 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.464 ************************************ 00:11:15.464 END TEST 
nvmf_rpc 00:11:15.464 ************************************ 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.464 ************************************ 00:11:15.464 START TEST nvmf_invalid 00:11:15.464 ************************************ 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:15.464 * Looking for test storage... 00:11:15.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.464 --rc genhtml_branch_coverage=1 00:11:15.464 --rc genhtml_function_coverage=1 00:11:15.464 --rc genhtml_legend=1 00:11:15.464 --rc geninfo_all_blocks=1 00:11:15.464 --rc geninfo_unexecuted_blocks=1 00:11:15.464 00:11:15.464 ' 
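The cmp_versions walk traced above splits each version on `.` and compares the numeric fields one by one (here concluding 1.15 < 2 for the lcov check). The same idea can be sketched as a small standalone function; `version_lt` and its zero-padding of missing fields are illustrative assumptions, not SPDK's actual scripts/common.sh code:

```shell
# version_lt: return 0 when $1 < $2 compared as dotted numeric fields.
# Hypothetical reimplementation of the per-field comparison in the trace.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)                     # split on '.' into fields
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}          # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                                   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing field by field (rather than lexically) is what makes 1.2.3 sort below 1.10, matching the behavior the trace relies on.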
00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.464 --rc genhtml_branch_coverage=1 00:11:15.464 --rc genhtml_function_coverage=1 00:11:15.464 --rc genhtml_legend=1 00:11:15.464 --rc geninfo_all_blocks=1 00:11:15.464 --rc geninfo_unexecuted_blocks=1 00:11:15.464 00:11:15.464 ' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.464 --rc genhtml_branch_coverage=1 00:11:15.464 --rc genhtml_function_coverage=1 00:11:15.464 --rc genhtml_legend=1 00:11:15.464 --rc geninfo_all_blocks=1 00:11:15.464 --rc geninfo_unexecuted_blocks=1 00:11:15.464 00:11:15.464 ' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.464 --rc genhtml_branch_coverage=1 00:11:15.464 --rc genhtml_function_coverage=1 00:11:15.464 --rc genhtml_legend=1 00:11:15.464 --rc geninfo_all_blocks=1 00:11:15.464 --rc geninfo_unexecuted_blocks=1 00:11:15.464 00:11:15.464 ' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.464 18:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.464 
18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.464 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.465 18:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.465 18:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.465 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.002 18:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.002 18:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.002 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:18.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:18.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:18.003 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:18.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.003 18:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.003 18:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:11:18.003 00:11:18.003 --- 10.0.0.2 ping statistics --- 00:11:18.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.003 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:11:18.003 00:11:18.003 --- 10.0.0.1 ping statistics --- 00:11:18.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.003 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.003 18:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1428958 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1428958 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1428958 ']' 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.003 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:18.003 [2024-12-09 18:00:40.665424] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:11:18.003 [2024-12-09 18:00:40.665510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.003 [2024-12-09 18:00:40.740130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.003 [2024-12-09 18:00:40.798360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.003 [2024-12-09 18:00:40.798428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.003 [2024-12-09 18:00:40.798442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.004 [2024-12-09 18:00:40.798452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.004 [2024-12-09 18:00:40.798461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
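The `waitforlisten 1428958` step above blocks until the target's RPC socket at /var/tmp/spdk.sock is usable. A minimal sketch of that polling pattern follows; `wait_for_socket` is a hypothetical helper, not SPDK's actual waitforlisten, which additionally checks that the PID is alive and that the RPC layer answers:

```shell
# wait_for_socket: poll until a UNIX-domain socket appears at $1,
# giving up after $2 attempts (default 100, ~10s at 0.1s intervals).
# Illustrative only -- the real helper also retries an RPC call.
wait_for_socket() {
  local path=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$path" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}
```

The retry budget mirrors the `local max_retries=100` visible in the trace; a plain file or missing path never satisfies the `-S` test, so the function times out rather than racing a half-started target.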
00:11:18.004 [2024-12-09 18:00:40.800045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.004 [2024-12-09 18:00:40.800125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.004 [2024-12-09 18:00:40.800182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.004 [2024-12-09 18:00:40.800185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:18.004 18:00:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14330 00:11:18.261 [2024-12-09 18:00:41.194990] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:18.261 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:18.261 { 00:11:18.261 "nqn": "nqn.2016-06.io.spdk:cnode14330", 00:11:18.261 "tgt_name": "foobar", 00:11:18.261 "method": "nvmf_create_subsystem", 00:11:18.261 "req_id": 1 00:11:18.261 } 00:11:18.261 Got JSON-RPC error 
response 00:11:18.261 response: 00:11:18.261 { 00:11:18.261 "code": -32603, 00:11:18.261 "message": "Unable to find target foobar" 00:11:18.261 }' 00:11:18.261 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:18.261 { 00:11:18.261 "nqn": "nqn.2016-06.io.spdk:cnode14330", 00:11:18.261 "tgt_name": "foobar", 00:11:18.261 "method": "nvmf_create_subsystem", 00:11:18.261 "req_id": 1 00:11:18.261 } 00:11:18.261 Got JSON-RPC error response 00:11:18.261 response: 00:11:18.261 { 00:11:18.261 "code": -32603, 00:11:18.261 "message": "Unable to find target foobar" 00:11:18.261 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:18.261 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:18.261 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3373 00:11:18.518 [2024-12-09 18:00:41.479991] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3373: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:18.518 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:18.518 { 00:11:18.518 "nqn": "nqn.2016-06.io.spdk:cnode3373", 00:11:18.518 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:18.518 "method": "nvmf_create_subsystem", 00:11:18.518 "req_id": 1 00:11:18.518 } 00:11:18.518 Got JSON-RPC error response 00:11:18.518 response: 00:11:18.518 { 00:11:18.518 "code": -32602, 00:11:18.518 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:18.518 }' 00:11:18.518 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:18.518 { 00:11:18.518 "nqn": "nqn.2016-06.io.spdk:cnode3373", 00:11:18.518 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:18.518 "method": "nvmf_create_subsystem", 00:11:18.518 
"req_id": 1 00:11:18.518 } 00:11:18.518 Got JSON-RPC error response 00:11:18.518 response: 00:11:18.518 { 00:11:18.518 "code": -32602, 00:11:18.518 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:18.518 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:18.518 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:18.518 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3496 00:11:18.775 [2024-12-09 18:00:41.748822] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3496: invalid model number 'SPDK_Controller' 00:11:18.775 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:18.775 { 00:11:18.776 "nqn": "nqn.2016-06.io.spdk:cnode3496", 00:11:18.776 "model_number": "SPDK_Controller\u001f", 00:11:18.776 "method": "nvmf_create_subsystem", 00:11:18.776 "req_id": 1 00:11:18.776 } 00:11:18.776 Got JSON-RPC error response 00:11:18.776 response: 00:11:18.776 { 00:11:18.776 "code": -32602, 00:11:18.776 "message": "Invalid MN SPDK_Controller\u001f" 00:11:18.776 }' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:18.776 { 00:11:18.776 "nqn": "nqn.2016-06.io.spdk:cnode3496", 00:11:18.776 "model_number": "SPDK_Controller\u001f", 00:11:18.776 "method": "nvmf_create_subsystem", 00:11:18.776 "req_id": 1 00:11:18.776 } 00:11:18.776 Got JSON-RPC error response 00:11:18.776 response: 00:11:18.776 { 00:11:18.776 "code": -32602, 00:11:18.776 "message": "Invalid MN SPDK_Controller\u001f" 00:11:18.776 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:18.776 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:18.776 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.776 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:19.034 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.034 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kd2^xa5$yvdx]]KB]"| B' 00:11:19.034 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'kd2^xa5$yvdx]]KB]"| B' nqn.2016-06.io.spdk:cnode6289 00:11:19.293 [2024-12-09 18:00:42.106067] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6289: invalid serial number 'kd2^xa5$yvdx]]KB]"| B' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:19.293 { 00:11:19.293 "nqn": "nqn.2016-06.io.spdk:cnode6289", 00:11:19.293 "serial_number": "kd2^xa5$yvdx]]KB]\"| B", 00:11:19.293 "method": "nvmf_create_subsystem", 00:11:19.293 "req_id": 1 00:11:19.293 } 00:11:19.293 Got JSON-RPC error response 00:11:19.293 response: 00:11:19.293 { 00:11:19.293 "code": -32602, 00:11:19.293 "message": "Invalid SN kd2^xa5$yvdx]]KB]\"| B" 00:11:19.293 }' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:19.293 { 00:11:19.293 "nqn": "nqn.2016-06.io.spdk:cnode6289", 00:11:19.293 "serial_number": "kd2^xa5$yvdx]]KB]\"| B", 00:11:19.293 "method": "nvmf_create_subsystem", 00:11:19.293 "req_id": 1 00:11:19.293 } 00:11:19.293 Got JSON-RPC error response 00:11:19.293 response: 00:11:19.293 { 00:11:19.293 "code": -32602, 00:11:19.293 "message": "Invalid SN kd2^xa5$yvdx]]KB]\"| B" 00:11:19.293 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:19.293 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:19.293 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:19.293 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:19.293 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:19.294 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 
00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:19.294 
18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 
18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:19.294 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.294 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:19.295 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b2'\''Z9&;eZ7JKx4'\''.OSai}fa?+G*aH&U^>RrZ4_k' 00:11:19.295 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'b2'\''Z9&;eZ7JKx4'\''.OSai}fa?+G*aH&U^>RrZ4_k' nqn.2016-06.io.spdk:cnode12088 00:11:19.552 [2024-12-09 18:00:42.499364] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12088: invalid model number 'b2'Z9&;eZ7JKx4'.OSai}fa?+G*aH&U^>RrZ4_k' 00:11:19.552 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:19.552 { 00:11:19.552 "nqn": "nqn.2016-06.io.spdk:cnode12088", 00:11:19.552 "model_number": "b2'\''Z9&;eZ7\u007fJKx4'\''.OSai}fa?+G\u007f*aH&U^>RrZ4_k", 00:11:19.552 "method": "nvmf_create_subsystem", 00:11:19.552 "req_id": 1 00:11:19.552 } 00:11:19.552 Got JSON-RPC error response 00:11:19.552 response: 00:11:19.552 { 00:11:19.552 "code": -32602, 00:11:19.552 
"message": "Invalid MN b2'\''Z9&;eZ7\u007fJKx4'\''.OSai}fa?+G\u007f*aH&U^>RrZ4_k" 00:11:19.552 }' 00:11:19.552 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:19.552 { 00:11:19.552 "nqn": "nqn.2016-06.io.spdk:cnode12088", 00:11:19.552 "model_number": "b2'Z9&;eZ7\u007fJKx4'.OSai}fa?+G\u007f*aH&U^>RrZ4_k", 00:11:19.552 "method": "nvmf_create_subsystem", 00:11:19.552 "req_id": 1 00:11:19.552 } 00:11:19.552 Got JSON-RPC error response 00:11:19.552 response: 00:11:19.552 { 00:11:19.552 "code": -32602, 00:11:19.552 "message": "Invalid MN b2'Z9&;eZ7\u007fJKx4'.OSai}fa?+G\u007f*aH&U^>RrZ4_k" 00:11:19.552 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:19.552 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:19.810 [2024-12-09 18:00:42.768326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.810 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:20.066 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:20.067 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:20.067 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:20.067 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:20.067 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:20.328 [2024-12-09 18:00:43.314156] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:20.328 18:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:20.328 { 00:11:20.328 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:20.328 "listen_address": { 00:11:20.328 "trtype": "tcp", 00:11:20.328 "traddr": "", 00:11:20.328 "trsvcid": "4421" 00:11:20.328 }, 00:11:20.328 "method": "nvmf_subsystem_remove_listener", 00:11:20.328 "req_id": 1 00:11:20.328 } 00:11:20.328 Got JSON-RPC error response 00:11:20.328 response: 00:11:20.328 { 00:11:20.328 "code": -32602, 00:11:20.328 "message": "Invalid parameters" 00:11:20.328 }' 00:11:20.328 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:20.328 { 00:11:20.328 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:20.328 "listen_address": { 00:11:20.328 "trtype": "tcp", 00:11:20.328 "traddr": "", 00:11:20.328 "trsvcid": "4421" 00:11:20.328 }, 00:11:20.328 "method": "nvmf_subsystem_remove_listener", 00:11:20.328 "req_id": 1 00:11:20.328 } 00:11:20.328 Got JSON-RPC error response 00:11:20.328 response: 00:11:20.328 { 00:11:20.328 "code": -32602, 00:11:20.328 "message": "Invalid parameters" 00:11:20.329 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:20.329 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18009 -i 0 00:11:20.586 [2024-12-09 18:00:43.595065] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18009: invalid cntlid range [0-65519] 00:11:20.586 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:20.586 { 00:11:20.586 "nqn": "nqn.2016-06.io.spdk:cnode18009", 00:11:20.586 "min_cntlid": 0, 00:11:20.586 "method": "nvmf_create_subsystem", 00:11:20.586 "req_id": 1 00:11:20.586 } 00:11:20.586 Got JSON-RPC error response 00:11:20.586 response: 00:11:20.586 { 00:11:20.586 "code": -32602, 00:11:20.586 "message": "Invalid cntlid 
range [0-65519]" 00:11:20.586 }' 00:11:20.586 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:20.586 { 00:11:20.586 "nqn": "nqn.2016-06.io.spdk:cnode18009", 00:11:20.586 "min_cntlid": 0, 00:11:20.586 "method": "nvmf_create_subsystem", 00:11:20.586 "req_id": 1 00:11:20.586 } 00:11:20.586 Got JSON-RPC error response 00:11:20.586 response: 00:11:20.586 { 00:11:20.586 "code": -32602, 00:11:20.586 "message": "Invalid cntlid range [0-65519]" 00:11:20.586 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:20.586 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16754 -i 65520 00:11:20.843 [2024-12-09 18:00:43.871992] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16754: invalid cntlid range [65520-65519] 00:11:21.100 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:21.100 { 00:11:21.100 "nqn": "nqn.2016-06.io.spdk:cnode16754", 00:11:21.100 "min_cntlid": 65520, 00:11:21.100 "method": "nvmf_create_subsystem", 00:11:21.100 "req_id": 1 00:11:21.100 } 00:11:21.100 Got JSON-RPC error response 00:11:21.100 response: 00:11:21.100 { 00:11:21.100 "code": -32602, 00:11:21.100 "message": "Invalid cntlid range [65520-65519]" 00:11:21.100 }' 00:11:21.100 18:00:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:21.100 { 00:11:21.100 "nqn": "nqn.2016-06.io.spdk:cnode16754", 00:11:21.100 "min_cntlid": 65520, 00:11:21.100 "method": "nvmf_create_subsystem", 00:11:21.100 "req_id": 1 00:11:21.100 } 00:11:21.100 Got JSON-RPC error response 00:11:21.100 response: 00:11:21.100 { 00:11:21.100 "code": -32602, 00:11:21.100 "message": "Invalid cntlid range [65520-65519]" 00:11:21.100 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.100 18:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10749 -I 0 00:11:21.357 [2024-12-09 18:00:44.140846] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10749: invalid cntlid range [1-0] 00:11:21.357 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:21.357 { 00:11:21.357 "nqn": "nqn.2016-06.io.spdk:cnode10749", 00:11:21.357 "max_cntlid": 0, 00:11:21.357 "method": "nvmf_create_subsystem", 00:11:21.357 "req_id": 1 00:11:21.357 } 00:11:21.357 Got JSON-RPC error response 00:11:21.357 response: 00:11:21.357 { 00:11:21.357 "code": -32602, 00:11:21.357 "message": "Invalid cntlid range [1-0]" 00:11:21.357 }' 00:11:21.357 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:21.357 { 00:11:21.357 "nqn": "nqn.2016-06.io.spdk:cnode10749", 00:11:21.357 "max_cntlid": 0, 00:11:21.357 "method": "nvmf_create_subsystem", 00:11:21.357 "req_id": 1 00:11:21.357 } 00:11:21.357 Got JSON-RPC error response 00:11:21.357 response: 00:11:21.357 { 00:11:21.357 "code": -32602, 00:11:21.357 "message": "Invalid cntlid range [1-0]" 00:11:21.357 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.357 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22011 -I 65520 00:11:21.615 [2024-12-09 18:00:44.401710] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22011: invalid cntlid range [1-65520] 00:11:21.615 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:21.615 { 00:11:21.615 "nqn": "nqn.2016-06.io.spdk:cnode22011", 00:11:21.615 "max_cntlid": 65520, 00:11:21.615 "method": "nvmf_create_subsystem", 00:11:21.615 
"req_id": 1 00:11:21.615 } 00:11:21.615 Got JSON-RPC error response 00:11:21.615 response: 00:11:21.615 { 00:11:21.615 "code": -32602, 00:11:21.615 "message": "Invalid cntlid range [1-65520]" 00:11:21.615 }' 00:11:21.615 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:21.615 { 00:11:21.615 "nqn": "nqn.2016-06.io.spdk:cnode22011", 00:11:21.615 "max_cntlid": 65520, 00:11:21.615 "method": "nvmf_create_subsystem", 00:11:21.615 "req_id": 1 00:11:21.615 } 00:11:21.615 Got JSON-RPC error response 00:11:21.615 response: 00:11:21.615 { 00:11:21.615 "code": -32602, 00:11:21.615 "message": "Invalid cntlid range [1-65520]" 00:11:21.615 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.615 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14773 -i 6 -I 5 00:11:21.873 [2024-12-09 18:00:44.674632] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14773: invalid cntlid range [6-5] 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:21.873 { 00:11:21.873 "nqn": "nqn.2016-06.io.spdk:cnode14773", 00:11:21.873 "min_cntlid": 6, 00:11:21.873 "max_cntlid": 5, 00:11:21.873 "method": "nvmf_create_subsystem", 00:11:21.873 "req_id": 1 00:11:21.873 } 00:11:21.873 Got JSON-RPC error response 00:11:21.873 response: 00:11:21.873 { 00:11:21.873 "code": -32602, 00:11:21.873 "message": "Invalid cntlid range [6-5]" 00:11:21.873 }' 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:21.873 { 00:11:21.873 "nqn": "nqn.2016-06.io.spdk:cnode14773", 00:11:21.873 "min_cntlid": 6, 00:11:21.873 "max_cntlid": 5, 00:11:21.873 "method": "nvmf_create_subsystem", 00:11:21.873 "req_id": 1 00:11:21.873 } 00:11:21.873 Got JSON-RPC error response 00:11:21.873 response: 
00:11:21.873 { 00:11:21.873 "code": -32602, 00:11:21.873 "message": "Invalid cntlid range [6-5]" 00:11:21.873 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:21.873 { 00:11:21.873 "name": "foobar", 00:11:21.873 "method": "nvmf_delete_target", 00:11:21.873 "req_id": 1 00:11:21.873 } 00:11:21.873 Got JSON-RPC error response 00:11:21.873 response: 00:11:21.873 { 00:11:21.873 "code": -32602, 00:11:21.873 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:21.873 }' 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:21.873 { 00:11:21.873 "name": "foobar", 00:11:21.873 "method": "nvmf_delete_target", 00:11:21.873 "req_id": 1 00:11:21.873 } 00:11:21.873 Got JSON-RPC error response 00:11:21.873 response: 00:11:21.873 { 00:11:21.873 "code": -32602, 00:11:21.873 "message": "The specified target doesn't exist, cannot delete it." 
00:11:21.873 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.873 rmmod nvme_tcp 00:11:21.873 rmmod nvme_fabrics 00:11:21.873 rmmod nvme_keyring 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1428958 ']' 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1428958 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1428958 ']' 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1428958 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.873 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1428958 00:11:22.133 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.133 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.133 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1428958' 00:11:22.133 killing process with pid 1428958 00:11:22.133 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1428958 00:11:22.133 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1428958 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.133 18:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.133 18:00:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.671 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.671 00:11:24.671 real 0m9.148s 00:11:24.671 user 0m21.307s 00:11:24.671 sys 0m2.672s 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 ************************************ 00:11:24.672 END TEST nvmf_invalid 00:11:24.672 ************************************ 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 ************************************ 00:11:24.672 START TEST nvmf_connect_stress 00:11:24.672 ************************************ 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:24.672 * Looking for test storage... 
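The nvmf_invalid run above probed the controller-ID bounds from every side: min of 0, min of 65520, max of 0, max of 65520, and min > max all drew `"Invalid cntlid range"` from `nvmf_create_subsystem`. A hedged sketch of the rule those responses imply (the helper name is hypothetical; the inferred bounds are 1..65519 with min <= max):

```shell
# Sketch of the cntlid range rule implied by the RPC errors above:
# controller IDs appear to be limited to 1..65519 (0xFFEF) and the
# minimum must not exceed the maximum. Helper name is an assumption.
valid_cntlid_range() {
  local min=$1 max=$2
  (( min >= 1 && max <= 65519 && min <= max ))
}
```

Each rejected pair in the log (`[0-65519]`, `[65520-65519]`, `[1-0]`, `[1-65520]`, `[6-5]`) fails this predicate, while the default `[1-65519]` passes.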
00:11:24.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:24.672 18:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.672 18:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.672 --rc genhtml_branch_coverage=1 00:11:24.672 --rc genhtml_function_coverage=1 00:11:24.672 --rc genhtml_legend=1 00:11:24.672 --rc geninfo_all_blocks=1 00:11:24.672 --rc geninfo_unexecuted_blocks=1 00:11:24.672 00:11:24.672 ' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.672 --rc genhtml_branch_coverage=1 00:11:24.672 --rc genhtml_function_coverage=1 00:11:24.672 --rc genhtml_legend=1 00:11:24.672 --rc geninfo_all_blocks=1 00:11:24.672 --rc geninfo_unexecuted_blocks=1 00:11:24.672 00:11:24.672 ' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.672 --rc genhtml_branch_coverage=1 00:11:24.672 --rc genhtml_function_coverage=1 00:11:24.672 --rc genhtml_legend=1 00:11:24.672 --rc geninfo_all_blocks=1 00:11:24.672 --rc geninfo_unexecuted_blocks=1 00:11:24.672 00:11:24.672 ' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.672 --rc genhtml_branch_coverage=1 00:11:24.672 --rc genhtml_function_coverage=1 00:11:24.672 --rc genhtml_legend=1 00:11:24.672 --rc geninfo_all_blocks=1 00:11:24.672 --rc geninfo_unexecuted_blocks=1 00:11:24.672 00:11:24.672 ' 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:24.672 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:24.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
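The `[: : integer expression expected` complaint above is nvmf/common.sh line 33 handing test's integer operator `-eq` an empty string. A minimal sketch of that failure mode and one common guard (the `${var:-0}` default here is illustrative, not necessarily how nvmf/common.sh itself resolves it):

```shell
#!/usr/bin/env bash
# '[' "" -eq 1 ']' is an error: -eq requires integers on both sides.
# test exits non-zero and prints "integer expression expected" on stderr,
# so control falls through to the else branch, as in the log above.
maybe_empty=""
if [ "$maybe_empty" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "disabled or unset"
fi

# Substituting a numeric default before comparing avoids the stderr noise:
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

Note the trace continues normally after the message: the test script does not run under `set -e` at that point, so the failed test only selects the else path.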
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:11:24.673 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:11:26.581 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:26.582 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:26.582 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:26.582 Found net devices under 0000:0a:00.0: cvl_0_0
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:26.582 Found net devices under 0000:0a:00.1: cvl_0_1
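The device discovery traced above maps each supported NIC's PCI address to its kernel interface name through sysfs (`/sys/bus/pci/devices/<bdf>/net/`) and then strips the directory prefix with `${var##*/}`. A sketch of that lookup, recreated in a scratch directory so it runs anywhere (the BDF and interface name are taken from the log; on a real host the glob would point at `/sys`):

```shell
#!/usr/bin/env bash
# Fake sysfs layout mirroring what the log found for 0000:0a:00.0.
tmp=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$tmp/$pci/net/cvl_0_0"

# On a real host: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("$tmp/$pci/net/"*)          # one array entry per net device
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip dirs, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$tmp"
```

The `##*/` expansion is what turns `/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0` into plain `cvl_0_0` in the "Found net devices" lines.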
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:26.582 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:26.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:26.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms
00:11:26.841
00:11:26.841 --- 10.0.0.2 ping statistics ---
00:11:26.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:26.841 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:26.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:26.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:11:26.841
00:11:26.841 --- 10.0.0.1 ping statistics ---
00:11:26.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:26.841 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1431605
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1431605
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1431605 ']'
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:26.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:26.841 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:26.841 [2024-12-09 18:00:49.734027] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
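The namespace wiring traced above moves the target-side port (cvl_0_0) into its own network namespace, so the SPDK target and the kernel initiator run on separate network stacks of the same host. A condensed sketch of that sequence; DRY_RUN=1 prints the commands instead of executing them, since the real thing needs root and this machine's cvl_0_* interfaces:

```shell
#!/usr/bin/env bash
# Print-only rehearsal of the netns setup from nvmf/common.sh's nvmf_tcp_init.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"                  # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default netns
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                               # initiator -> target sanity check
```

With both ports cabled back-to-back, traffic between 10.0.0.1 and 10.0.0.2 actually traverses the physical NICs, which is why the log pings both directions before starting nvmf_tgt inside the namespace.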
00:11:26.841 [2024-12-09 18:00:49.734129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:26.841 [2024-12-09 18:00:49.805829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:26.841 [2024-12-09 18:00:49.859268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:26.841 [2024-12-09 18:00:49.859326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:26.841 [2024-12-09 18:00:49.859350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:26.841 [2024-12-09 18:00:49.859361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:26.841 [2024-12-09 18:00:49.859370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:26.841 [2024-12-09 18:00:49.860879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:26.841 [2024-12-09 18:00:49.861011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:26.841 [2024-12-09 18:00:49.861015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:27.100 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:27.100 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:11:27.100 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:27.100 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:27.100 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.100 [2024-12-09 18:00:50.007194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.100 [2024-12-09 18:00:50.024671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.100 NULL1
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1431626
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.100 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.666 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.666 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626
00:11:27.666 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:27.666 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.666 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.924 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.924 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626
00:11:27.924 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:27.924 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.924 18:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:28.182 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.182 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626
00:11:28.182 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:28.182 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.182 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:28.440 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.440 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626
00:11:28.440 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:28.440 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.440 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:28.698 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.698 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:28.698 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.698 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.698 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.264 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.264 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:29.264 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.264 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.264 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.522 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.522 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:29.522 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.522 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.522 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.781 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.781 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:29.781 18:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.781 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.781 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.039 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.039 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:30.039 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.039 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.039 18:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.297 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.297 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:30.297 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.297 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.297 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.862 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.863 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:30.863 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.863 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.863 
18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.121 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.121 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:31.121 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.121 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.121 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.379 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.379 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:31.379 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.379 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.379 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.637 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.637 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:31.637 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.637 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.637 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.895 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.895 
18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:31.895 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.895 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.895 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.461 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.461 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:32.461 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.462 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.462 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.720 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.720 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:32.720 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.720 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.720 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.978 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.978 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:32.978 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:11:32.978 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.978 18:00:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.236 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.236 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:33.236 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.236 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.236 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.494 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.494 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:33.494 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.494 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.494 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.129 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.129 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:34.129 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.129 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.129 18:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:11:34.129 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.129 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:34.129 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.129 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.129 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.694 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.694 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:34.694 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.694 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.694 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.951 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.951 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:34.951 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.951 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.951 18:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.209 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.209 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1431626 00:11:35.209 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.209 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.209 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.466 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.466 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:35.466 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.466 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.466 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.723 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.724 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:35.724 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.724 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.724 18:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.289 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.289 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:36.289 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.289 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:36.289 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.546 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.546 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:36.547 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.547 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.547 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.805 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.805 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:36.805 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.805 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.805 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.063 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.063 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626 00:11:37.063 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.063 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.063 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.320 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
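The trace above is connect_stress.sh repeatedly probing the background stressor with `kill -0` (signal 0 delivers nothing; it only checks that the PID exists) and issuing an RPC each time the worker is still alive. A minimal standalone sketch of that poll loop, with a hypothetical `do_rpc` stand-in for SPDK's `rpc_cmd` helper and an arbitrary worker and interval:

```shell
#!/usr/bin/env bash
# Poll a background worker with `kill -0`; keep issuing work while it lives.
set -u

do_rpc() {          # hypothetical stand-in for SPDK's rpc_cmd helper
    :               # a real harness would talk to the target here
}

sleep 2 &           # stand-in for the stress worker the test launches
worker=$!

polls=0
while kill -0 "$worker" 2>/dev/null; do   # signal 0: existence check only
    do_rpc
    polls=$((polls + 1))
    sleep 0.2
done
wait "$worker"      # reap the worker so no zombie is left behind
echo "worker gone after $polls polls"
```

Once the PID disappears, `kill -0` fails (the "No such process" message recorded further down) and the script falls through to its `wait`/`rm -f` teardown.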
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1431626
00:11:37.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1431626) - No such process
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1431626
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:37.578 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1431605 ']'
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1431605
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1431605 ']'
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1431605
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431605
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431605'
killing process with pid 1431605
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1431605
00:11:37.578 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1431605
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:37.838 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:39.765
00:11:39.765 real 0m15.499s
00:11:39.765 user 0m38.814s
00:11:39.765 sys 0m5.880s
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:39.765 ************************************
00:11:39.765 END TEST nvmf_connect_stress
00:11:39.765 ************************************
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:39.765 ************************************
00:11:39.765 START TEST nvmf_fused_ordering
00:11:39.765 ************************************
00:11:39.765 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:40.025 * Looking for test storage...
00:11:40.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.025 --rc genhtml_branch_coverage=1
00:11:40.025 --rc genhtml_function_coverage=1
00:11:40.025 --rc genhtml_legend=1
00:11:40.025 --rc geninfo_all_blocks=1
00:11:40.025 --rc geninfo_unexecuted_blocks=1
00:11:40.025
00:11:40.025 '
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.025 --rc genhtml_branch_coverage=1
00:11:40.025 --rc genhtml_function_coverage=1
00:11:40.025 --rc genhtml_legend=1
00:11:40.025 --rc geninfo_all_blocks=1
00:11:40.025 --rc geninfo_unexecuted_blocks=1
00:11:40.025
00:11:40.025 '
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.025 --rc genhtml_branch_coverage=1
00:11:40.025 --rc genhtml_function_coverage=1
00:11:40.025 --rc genhtml_legend=1
00:11:40.025 --rc geninfo_all_blocks=1
00:11:40.025 --rc geninfo_unexecuted_blocks=1
00:11:40.025
00:11:40.025 '
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.025 --rc genhtml_branch_coverage=1
00:11:40.025 --rc genhtml_function_coverage=1
00:11:40.025 --rc genhtml_legend=1
00:11:40.025 --rc geninfo_all_blocks=1
00:11:40.025 --rc geninfo_unexecuted_blocks=1
00:11:40.025
00:11:40.025 '
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- #
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 18:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.025 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.026 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.561 18:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.561 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:42.562 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.562 18:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:42.562 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.562 18:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:42.562 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:42.562 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:42.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:11:42.562 00:11:42.562 --- 10.0.0.2 ping statistics --- 00:11:42.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.562 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:11:42.562 00:11:42.562 --- 10.0.0.1 ping statistics --- 00:11:42.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.562 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:42.562 18:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1434873 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1434873 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1434873 ']' 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.562 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.562 [2024-12-09 18:01:05.228364] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:11:42.562 [2024-12-09 18:01:05.228469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.562 [2024-12-09 18:01:05.301247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.562 [2024-12-09 18:01:05.359126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.562 [2024-12-09 18:01:05.359188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.562 [2024-12-09 18:01:05.359211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.563 [2024-12-09 18:01:05.359221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.563 [2024-12-09 18:01:05.359230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.563 [2024-12-09 18:01:05.359759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 [2024-12-09 18:01:05.493088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 [2024-12-09 18:01:05.509286] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 NULL1 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:42.563 [2024-12-09 18:01:05.552725] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:11:42.563 [2024-12-09 18:01:05.552760] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434934 ] 00:11:43.128 Attached to nqn.2016-06.io.spdk:cnode1 00:11:43.128 Namespace ID: 1 size: 1GB 00:11:43.128 fused_ordering(0) 00:11:43.128 fused_ordering(1) 00:11:43.128 fused_ordering(2) 00:11:43.128 fused_ordering(3) 00:11:43.128 fused_ordering(4) 00:11:43.128 fused_ordering(5) 00:11:43.128 fused_ordering(6) 00:11:43.128 fused_ordering(7) 00:11:43.128 fused_ordering(8) 00:11:43.128 fused_ordering(9) 00:11:43.128 fused_ordering(10) 00:11:43.128 fused_ordering(11) 00:11:43.128 fused_ordering(12) 00:11:43.128 fused_ordering(13) 00:11:43.128 fused_ordering(14) 00:11:43.128 fused_ordering(15) 00:11:43.128 fused_ordering(16) 00:11:43.128 fused_ordering(17) 00:11:43.128 fused_ordering(18) 00:11:43.128 fused_ordering(19) 00:11:43.128 fused_ordering(20) 00:11:43.128 fused_ordering(21) 00:11:43.128 fused_ordering(22) 00:11:43.128 fused_ordering(23) 00:11:43.128 fused_ordering(24) 00:11:43.128 fused_ordering(25) 00:11:43.128 fused_ordering(26) 00:11:43.128 fused_ordering(27) 00:11:43.128 
fused_ordering(28) 00:11:43.128 [fused_ordering iterations 29-1022 elided: identical per-iteration counter lines, timestamps advancing from 00:11:43.128 to 00:11:45.085] fused_ordering(1023) 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.085 rmmod nvme_tcp 00:11:45.085 rmmod nvme_fabrics 00:11:45.085 rmmod nvme_keyring 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1434873 ']' 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1434873 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1434873 ']' 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1434873 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.085 18:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1434873 00:11:45.085 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:45.085 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:45.085 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1434873' 00:11:45.085 killing process with pid 1434873 00:11:45.085 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1434873 00:11:45.085 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1434873 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.344 18:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.251 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.251 00:11:47.251 real 0m7.499s 00:11:47.251 user 0m5.076s 00:11:47.251 sys 0m3.135s 00:11:47.251 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.251 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 ************************************ 00:11:47.251 END TEST nvmf_fused_ordering 00:11:47.251 ************************************ 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:47.510 18:01:10 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.510 ************************************ 00:11:47.510 START TEST nvmf_ns_masking 00:11:47.510 ************************************ 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:47.510 * Looking for test storage... 00:11:47.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.510 18:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.510 --rc genhtml_branch_coverage=1 00:11:47.510 --rc genhtml_function_coverage=1 00:11:47.510 --rc genhtml_legend=1 00:11:47.510 --rc geninfo_all_blocks=1 00:11:47.510 --rc geninfo_unexecuted_blocks=1 00:11:47.510 00:11:47.510 ' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.510 --rc genhtml_branch_coverage=1 00:11:47.510 --rc genhtml_function_coverage=1 00:11:47.510 --rc genhtml_legend=1 00:11:47.510 --rc geninfo_all_blocks=1 00:11:47.510 --rc geninfo_unexecuted_blocks=1 00:11:47.510 00:11:47.510 ' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.510 --rc genhtml_branch_coverage=1 00:11:47.510 --rc genhtml_function_coverage=1 00:11:47.510 --rc genhtml_legend=1 00:11:47.510 --rc geninfo_all_blocks=1 00:11:47.510 --rc geninfo_unexecuted_blocks=1 00:11:47.510 00:11:47.510 ' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.510 --rc genhtml_branch_coverage=1 00:11:47.510 --rc 
genhtml_function_coverage=1 00:11:47.510 --rc genhtml_legend=1 00:11:47.510 --rc geninfo_all_blocks=1 00:11:47.510 --rc geninfo_unexecuted_blocks=1 00:11:47.510 00:11:47.510 ' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.510 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=39fddfc7-d08d-4686-88c8-f0726d121b06 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=cb845c46-7f9d-4a43-8895-d5293ebd98d1 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d14d9ce8-3288-414f-95af-4c9336000373 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.511 18:01:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.042 18:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.042 18:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.042 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:11:50.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.042 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:11:50.043 00:11:50.043 --- 10.0.0.2 ping statistics --- 00:11:50.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.043 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:11:50.043 00:11:50.043 --- 10.0.0.1 ping statistics --- 00:11:50.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.043 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1437142 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1437142 
00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1437142 ']' 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.043 18:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.043 [2024-12-09 18:01:12.883186] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:11:50.043 [2024-12-09 18:01:12.883271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.043 [2024-12-09 18:01:12.956446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.043 [2024-12-09 18:01:13.014483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.043 [2024-12-09 18:01:13.014543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.043 [2024-12-09 18:01:13.014565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.043 [2024-12-09 18:01:13.014576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.043 [2024-12-09 18:01:13.014586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.043 [2024-12-09 18:01:13.015182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.301 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.558 [2024-12-09 18:01:13.397429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.558 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:50.558 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:50.558 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
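The target bring-up traced above (a TCP transport with an 8192-byte IO unit size, then 64 MiB malloc bdevs with 512-byte blocks) can be condensed into a short sketch. The `rpc` function here is a hypothetical stub that only echoes what the test sends to scripts/rpc.py; no live SPDK target is assumed.

```shell
#!/usr/bin/env bash
# Hypothetical stub standing in for scripts/rpc.py -- it echoes the call
# instead of contacting a target over /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }

# Transport and bdev creation, as traced in the log above.
rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8192-byte IO unit
rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512-byte blocks
rpc bdev_malloc_create 64 512 -b Malloc2
```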
00:11:50.816 Malloc1 00:11:50.816 18:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:51.075 Malloc2 00:11:51.075 18:01:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.332 18:01:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:51.590 18:01:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.848 [2024-12-09 18:01:14.806222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.848 18:01:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:51.848 18:01:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d14d9ce8-3288-414f-95af-4c9336000373 -a 10.0.0.2 -s 4420 -i 4 00:11:52.105 18:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.105 18:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.105 18:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.105 18:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:52.105 18:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.002 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.259 [ 0]:0x1 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.259 
18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b05423617ca4f20ae4cb63e9ad34906 00:11:54.259 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b05423617ca4f20ae4cb63e9ad34906 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.260 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:54.518 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:54.518 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.518 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.518 [ 0]:0x1 00:11:54.518 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.518 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b05423617ca4f20ae4cb63e9ad34906 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b05423617ca4f20ae4cb63e9ad34906 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.776 [ 1]:0x2 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
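The `ns_is_visible` checks traced above boil down to comparing the NGUID that `nvme id-ns` reports against the all-zero NGUID a masked namespace returns. A minimal sketch of that comparison, with a hypothetical `nguid_of` stub in place of the real `nvme id-ns /dev/nvme0 -n $nsid -o json | jq -r .nguid` pipeline:

```shell
#!/usr/bin/env bash
# Hypothetical stub: the real test shells out to nvme-cli and jq here.
nguid_of() {
  case "$1" in
    0x1) echo 6b05423617ca4f20ae4cb63e9ad34906 ;;  # visible namespace: real NGUID
    *)   echo 00000000000000000000000000000000 ;;  # masked namespace: all zeros
  esac
}

# A namespace counts as visible when its NGUID is not the all-zero placeholder.
ns_is_visible() {
  [[ "$(nguid_of "$1")" != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x1 && echo "0x1 visible"
ns_is_visible 0x2 || echo "0x2 masked"
```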
00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:54.776 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.033 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.290 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d14d9ce8-3288-414f-95af-4c9336000373 -a 10.0.0.2 -s 4420 -i 4 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.549 18:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:55.549 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
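The `waitforserial` loop that just ran polls `lsblk -l -o NAME,SERIAL` until the expected number of devices carrying serial SPDKISFASTANDAWESOME appears, giving up after 16 probes. A sketch of the same loop, with a hypothetical `count_devices` stub in place of the `lsblk | grep -c` pipeline:

```shell
#!/usr/bin/env bash
# Hypothetical stub for `lsblk -l -o NAME,SERIAL | grep -c "$serial"`.
count_devices() { echo "${FAKE_DEVICES:-0}"; }

waitforserial() {
  local want=${1:-1} i=0
  while (( i++ <= 15 )); do          # up to 16 probes, as in the traced loop
    if (( $(count_devices) == want )); then
      return 0
    fi
    sleep 0                          # the real helper sleeps 2s between probes
  done
  return 1
}

FAKE_DEVICES=1
waitforserial 1 && echo "serial found"
```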
00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.077 [ 0]:0x2 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.077 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:58.077 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:58.077 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.077 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.077 [ 0]:0x1 00:11:58.077 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b05423617ca4f20ae4cb63e9ad34906 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b05423617ca4f20ae4cb63e9ad34906 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.078 [ 1]:0x2 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.078 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.335 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:11:58.335 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.335 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.593 [ 0]:0x2 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.593 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:58.594 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.594 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:58.851 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:58.851 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d14d9ce8-3288-414f-95af-4c9336000373 -a 10.0.0.2 -s 4420 -i 4 00:11:59.109 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:59.109 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:59.109 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.109 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:59.109 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:59.109 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.636 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.637 [ 0]:0x1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.637 18:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b05423617ca4f20ae4cb63e9ad34906 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b05423617ca4f20ae4cb63e9ad34906 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.637 [ 1]:0x2 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:01.637 
18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.637 [ 0]:0x2 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.637 18:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:01.637 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:12:01.895 [2024-12-09 18:01:24.876173] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:12:01.895 request:
00:12:01.895 {
00:12:01.895 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:01.895 "nsid": 2,
00:12:01.895 "host": "nqn.2016-06.io.spdk:host1",
00:12:01.895 "method": "nvmf_ns_remove_host",
00:12:01.895 "req_id": 1
00:12:01.895 }
00:12:01.895 Got JSON-RPC error response
00:12:01.895 response:
00:12:01.895 {
00:12:01.895 "code": -32602,
00:12:01.895 "message": "Invalid parameters"
00:12:01.895 }
00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.895 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:02.153 18:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.153 [ 0]:0x2 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d4a030c37ba467a8019dff8bb82b6a9 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d4a030c37ba467a8019dff8bb82b6a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:02.153 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1438767 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1438767 /var/tmp/host.sock 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1438767 ']' 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:02.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.153 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:02.153 [2024-12-09 18:01:25.098246] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:12:02.153 [2024-12-09 18:01:25.098346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438767 ] 00:12:02.153 [2024-12-09 18:01:25.165184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.412 [2024-12-09 18:01:25.222268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.670 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.670 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:02.670 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.927 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:03.185 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 39fddfc7-d08d-4686-88c8-f0726d121b06 00:12:03.185 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:03.185 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 39FDDFC7D08D468688C8F0726D121B06 -i 00:12:03.443 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid cb845c46-7f9d-4a43-8895-d5293ebd98d1 00:12:03.443 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:03.443 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CB845C467F9D4A438895D5293EBD98D1 -i 00:12:03.701 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.958 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:04.216 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:04.216 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:04.782 nvme0n1 00:12:04.782 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:04.782 18:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:05.040 nvme1n2 00:12:05.040 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:05.040 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:05.040 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:05.040 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:05.040 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:05.297 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:05.297 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:05.297 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:05.297 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:05.555 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 39fddfc7-d08d-4686-88c8-f0726d121b06 == \3\9\f\d\d\f\c\7\-\d\0\8\d\-\4\6\8\6\-\8\8\c\8\-\f\0\7\2\6\d\1\2\1\b\0\6 ]] 00:12:05.555 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:05.555 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:05.555 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:05.813 18:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ cb845c46-7f9d-4a43-8895-d5293ebd98d1 == \c\b\8\4\5\c\4\6\-\7\f\9\d\-\4\a\4\3\-\8\8\9\5\-\d\5\2\9\3\e\b\d\9\8\d\1 ]] 00:12:05.813 18:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 39fddfc7-d08d-4686-88c8-f0726d121b06 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 39FDDFC7D08D468688C8F0726D121B06 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 39FDDFC7D08D468688C8F0726D121B06 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:06.379 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 39FDDFC7D08D468688C8F0726D121B06 00:12:06.638 [2024-12-09 18:01:29.642222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:06.638 [2024-12-09 18:01:29.642258] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:06.638 [2024-12-09 18:01:29.642277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.638 request: 00:12:06.638 { 00:12:06.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.638 "namespace": { 00:12:06.638 "bdev_name": "invalid", 00:12:06.638 "nsid": 1, 00:12:06.638 "nguid": "39FDDFC7D08D468688C8F0726D121B06", 00:12:06.638 "no_auto_visible": false, 00:12:06.638 "hide_metadata": false 00:12:06.638 }, 00:12:06.638 "method": "nvmf_subsystem_add_ns", 00:12:06.638 "req_id": 1 00:12:06.638 } 00:12:06.638 Got JSON-RPC error response 00:12:06.638 response: 00:12:06.638 { 00:12:06.638 "code": -32602, 00:12:06.638 "message": "Invalid parameters" 00:12:06.638 } 00:12:06.638 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:06.638 18:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:06.638 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:06.638 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:06.638 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 39fddfc7-d08d-4686-88c8-f0726d121b06 00:12:06.638 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:06.638 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 39FDDFC7D08D468688C8F0726D121B06 -i 00:12:06.895 18:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:09.423 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:09.423 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:09.423 18:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1438767 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1438767 ']' 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1438767 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:09.423 18:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438767 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438767' 00:12:09.423 killing process with pid 1438767 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1438767 00:12:09.423 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1438767 00:12:09.681 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.939 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:12:09.939 rmmod nvme_tcp 00:12:09.939 rmmod nvme_fabrics 00:12:10.197 rmmod nvme_keyring 00:12:10.197 18:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1437142 ']' 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1437142 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1437142 ']' 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1437142 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1437142 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1437142' 00:12:10.197 killing process with pid 1437142 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1437142 00:12:10.197 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1437142 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.456 18:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.363 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.363 00:12:12.363 real 0m25.047s 00:12:12.363 user 0m36.199s 00:12:12.363 sys 0m4.713s 00:12:12.363 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.363 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:12.363 ************************************ 00:12:12.363 END TEST nvmf_ns_masking 00:12:12.363 ************************************ 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.623 ************************************ 00:12:12.623 START TEST nvmf_nvme_cli 00:12:12.623 ************************************ 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:12.623 * Looking for test storage... 00:12:12.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.623 --rc genhtml_branch_coverage=1 00:12:12.623 --rc genhtml_function_coverage=1 00:12:12.623 --rc genhtml_legend=1 00:12:12.623 --rc geninfo_all_blocks=1 00:12:12.623 --rc geninfo_unexecuted_blocks=1 00:12:12.623 
00:12:12.623 ' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.623 --rc genhtml_branch_coverage=1 00:12:12.623 --rc genhtml_function_coverage=1 00:12:12.623 --rc genhtml_legend=1 00:12:12.623 --rc geninfo_all_blocks=1 00:12:12.623 --rc geninfo_unexecuted_blocks=1 00:12:12.623 00:12:12.623 ' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.623 --rc genhtml_branch_coverage=1 00:12:12.623 --rc genhtml_function_coverage=1 00:12:12.623 --rc genhtml_legend=1 00:12:12.623 --rc geninfo_all_blocks=1 00:12:12.623 --rc geninfo_unexecuted_blocks=1 00:12:12.623 00:12:12.623 ' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.623 --rc genhtml_branch_coverage=1 00:12:12.623 --rc genhtml_function_coverage=1 00:12:12.623 --rc genhtml_legend=1 00:12:12.623 --rc geninfo_all_blocks=1 00:12:12.623 --rc geninfo_unexecuted_blocks=1 00:12:12.623 00:12:12.623 ' 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:12:12.623 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.624 18:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.624 18:01:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:15.194 18:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.194 18:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.194 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.195 18:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:12:15.195 00:12:15.195 --- 10.0.0.2 ping statistics --- 00:12:15.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.195 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:12:15.195 00:12:15.195 --- 10.0.0.1 ping statistics --- 00:12:15.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.195 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.195 18:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1441687 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1441687 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1441687 ']' 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.195 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.195 [2024-12-09 18:01:38.045459] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:12:15.195 [2024-12-09 18:01:38.045531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.195 [2024-12-09 18:01:38.121184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.195 [2024-12-09 18:01:38.177570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.195 [2024-12-09 18:01:38.177628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.195 [2024-12-09 18:01:38.177653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.195 [2024-12-09 18:01:38.177664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.195 [2024-12-09 18:01:38.177674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.195 [2024-12-09 18:01:38.179081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.195 [2024-12-09 18:01:38.179144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.195 [2024-12-09 18:01:38.179215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.195 [2024-12-09 18:01:38.179213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 [2024-12-09 18:01:38.328960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 Malloc0 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 Malloc1 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.475 [2024-12-09 18:01:38.425004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.475 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:15.733 00:12:15.733 Discovery Log Number of Records 2, Generation counter 2 00:12:15.733 =====Discovery Log Entry 0====== 00:12:15.733 trtype: tcp 00:12:15.733 adrfam: ipv4 00:12:15.733 subtype: current discovery subsystem 00:12:15.733 treq: not required 00:12:15.733 portid: 0 00:12:15.733 trsvcid: 4420 
00:12:15.733 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.733 traddr: 10.0.0.2 00:12:15.733 eflags: explicit discovery connections, duplicate discovery information 00:12:15.733 sectype: none 00:12:15.733 =====Discovery Log Entry 1====== 00:12:15.733 trtype: tcp 00:12:15.733 adrfam: ipv4 00:12:15.733 subtype: nvme subsystem 00:12:15.733 treq: not required 00:12:15.733 portid: 0 00:12:15.733 trsvcid: 4420 00:12:15.733 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:15.733 traddr: 10.0.0.2 00:12:15.733 eflags: none 00:12:15.733 sectype: none 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:15.733 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.299 18:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:16.299 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:16.299 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.299 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:16.299 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:16.299 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.198 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:18.456 
18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:18.456 /dev/nvme0n2 ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:18.456 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:18.715 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.973 rmmod nvme_tcp 00:12:18.973 rmmod nvme_fabrics 00:12:18.973 rmmod nvme_keyring 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1441687 ']' 
00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1441687 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1441687 ']' 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1441687 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1441687 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1441687' 00:12:18.973 killing process with pid 1441687 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1441687 00:12:18.973 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1441687 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.232 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.771 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.771 00:12:21.771 real 0m8.757s 00:12:21.771 user 0m16.416s 00:12:21.772 sys 0m2.333s 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.772 ************************************ 00:12:21.772 END TEST nvmf_nvme_cli 00:12:21.772 ************************************ 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.772 ************************************ 
00:12:21.772 START TEST nvmf_vfio_user 00:12:21.772 ************************************ 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:21.772 * Looking for test storage... 00:12:21.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.772 
18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:21.772 18:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:21.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.772 --rc genhtml_branch_coverage=1 00:12:21.772 --rc genhtml_function_coverage=1 00:12:21.772 --rc genhtml_legend=1 00:12:21.772 --rc geninfo_all_blocks=1 00:12:21.772 --rc geninfo_unexecuted_blocks=1 00:12:21.772 00:12:21.772 ' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:21.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.772 --rc genhtml_branch_coverage=1 00:12:21.772 --rc genhtml_function_coverage=1 00:12:21.772 --rc genhtml_legend=1 00:12:21.772 --rc geninfo_all_blocks=1 00:12:21.772 --rc geninfo_unexecuted_blocks=1 00:12:21.772 00:12:21.772 ' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:21.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.772 --rc genhtml_branch_coverage=1 00:12:21.772 --rc genhtml_function_coverage=1 00:12:21.772 --rc genhtml_legend=1 00:12:21.772 --rc geninfo_all_blocks=1 00:12:21.772 --rc geninfo_unexecuted_blocks=1 00:12:21.772 00:12:21.772 ' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:21.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.772 --rc genhtml_branch_coverage=1 00:12:21.772 --rc genhtml_function_coverage=1 00:12:21.772 --rc genhtml_legend=1 00:12:21.772 --rc geninfo_all_blocks=1 00:12:21.772 --rc geninfo_unexecuted_blocks=1 00:12:21.772 00:12:21.772 ' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.772 
18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.772 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:21.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:21.773 18:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1442622 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1442622' 00:12:21.773 Process pid: 1442622 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1442622 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1442622 ']' 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:21.773 [2024-12-09 18:01:44.433777] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:12:21.773 [2024-12-09 18:01:44.433883] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.773 [2024-12-09 18:01:44.504925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.773 [2024-12-09 18:01:44.566453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.773 [2024-12-09 18:01:44.566516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.773 [2024-12-09 18:01:44.566552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.773 [2024-12-09 18:01:44.566567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.773 [2024-12-09 18:01:44.566576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.773 [2024-12-09 18:01:44.570565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.773 [2024-12-09 18:01:44.570636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.773 [2024-12-09 18:01:44.570707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.773 [2024-12-09 18:01:44.570710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:21.773 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:22.705 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:22.962 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:22.962 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:22.962 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:22.962 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:22.962 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.220 Malloc1 00:12:23.478 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:23.736 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:23.993 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:24.251 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:24.251 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:24.251 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.509 Malloc2 00:12:24.509 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:24.770 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:25.032 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:25.291 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:25.291 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:25.291 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:25.291 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:25.291 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:25.291 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:25.291 [2024-12-09 18:01:48.191443] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:12:25.291 [2024-12-09 18:01:48.191485] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443043 ] 00:12:25.291 [2024-12-09 18:01:48.242835] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:25.291 [2024-12-09 18:01:48.248021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.291 [2024-12-09 18:01:48.248062] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd24bcee000 00:12:25.291 [2024-12-09 18:01:48.249017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.250024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.251017] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.252020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.253027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.254025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.255036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.256045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.291 [2024-12-09 18:01:48.257051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.291 [2024-12-09 18:01:48.257071] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd24bce3000 00:12:25.291 [2024-12-09 18:01:48.258196] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.291 [2024-12-09 18:01:48.273240] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:25.291 [2024-12-09 18:01:48.273287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:25.291 [2024-12-09 18:01:48.278174] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:12:25.291 [2024-12-09 18:01:48.278238] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:25.291 [2024-12-09 18:01:48.278361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:25.291 [2024-12-09 18:01:48.278390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:25.291 [2024-12-09 18:01:48.278401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:25.291 [2024-12-09 18:01:48.280557] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:25.292 [2024-12-09 18:01:48.280580] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:25.292 [2024-12-09 18:01:48.280594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:25.292 [2024-12-09 18:01:48.281180] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:25.292 [2024-12-09 18:01:48.281199] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:25.292 [2024-12-09 18:01:48.281212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:25.292 [2024-12-09 18:01:48.282185] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:25.292 [2024-12-09 18:01:48.282206] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:25.292 [2024-12-09 18:01:48.283191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:25.292 [2024-12-09 18:01:48.283210] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:25.292 [2024-12-09 18:01:48.283219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:25.292 [2024-12-09 18:01:48.283234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:25.292 [2024-12-09 18:01:48.283345] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:25.292 [2024-12-09 18:01:48.283353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:25.292 [2024-12-09 18:01:48.283361] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:25.292 [2024-12-09 18:01:48.284202] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:25.292 [2024-12-09 18:01:48.285214] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:25.292 [2024-12-09 18:01:48.286213] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:12:25.292 [2024-12-09 18:01:48.287206] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.292 [2024-12-09 18:01:48.287328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:25.292 [2024-12-09 18:01:48.288221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:25.292 [2024-12-09 18:01:48.288240] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:25.292 [2024-12-09 18:01:48.288249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:25.292 [2024-12-09 18:01:48.288287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288319] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.292 [2024-12-09 18:01:48.288328] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.292 [2024-12-09 18:01:48.288335] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.292 [2024-12-09 18:01:48.288366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.288446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.288465] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:25.292 [2024-12-09 18:01:48.288477] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:25.292 [2024-12-09 18:01:48.288485] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:25.292 [2024-12-09 18:01:48.288493] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:25.292 [2024-12-09 18:01:48.288500] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:25.292 [2024-12-09 18:01:48.288507] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:25.292 [2024-12-09 18:01:48.288515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.288591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.288609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.292 [2024-12-09 
18:01:48.288622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.292 [2024-12-09 18:01:48.288635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.292 [2024-12-09 18:01:48.288647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.292 [2024-12-09 18:01:48.288655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.288699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.288710] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:25.292 [2024-12-09 18:01:48.288719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.288764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.288849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.288907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:25.292 [2024-12-09 18:01:48.288915] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:25.292 [2024-12-09 18:01:48.288921] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.292 [2024-12-09 18:01:48.288930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.288944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.288962] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:25.292 [2024-12-09 18:01:48.288989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289016] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.292 [2024-12-09 18:01:48.289023] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.292 [2024-12-09 18:01:48.289029] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.292 [2024-12-09 18:01:48.289038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.289072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.289095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289121] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.292 [2024-12-09 18:01:48.289128] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.292 [2024-12-09 18:01:48.289134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.292 [2024-12-09 18:01:48.289151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.292 [2024-12-09 18:01:48.289166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:25.292 [2024-12-09 18:01:48.289180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:25.292 [2024-12-09 18:01:48.289233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:25.293 [2024-12-09 18:01:48.289242] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:25.293 [2024-12-09 18:01:48.289249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:25.293 [2024-12-09 18:01:48.289257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:25.293 [2024-12-09 18:01:48.289285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:25.293 [2024-12-09 18:01:48.289429] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:25.293 [2024-12-09 18:01:48.289435] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:25.293 [2024-12-09 18:01:48.289440] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:25.293 [2024-12-09 18:01:48.289446] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:25.293 [2024-12-09 18:01:48.289454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:12:25.293 [2024-12-09 18:01:48.289466] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:25.293 [2024-12-09 18:01:48.289474] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:25.293 [2024-12-09 18:01:48.289479] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.293 [2024-12-09 18:01:48.289488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289499] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:25.293 [2024-12-09 18:01:48.289506] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.293 [2024-12-09 18:01:48.289512] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.293 [2024-12-09 18:01:48.289520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289558] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:25.293 [2024-12-09 18:01:48.289569] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:25.293 [2024-12-09 18:01:48.289575] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.293 [2024-12-09 18:01:48.289584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:25.293 [2024-12-09 18:01:48.289595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:25.293 [2024-12-09 18:01:48.289645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:25.293 ===================================================== 00:12:25.293 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:25.293 ===================================================== 00:12:25.293 Controller Capabilities/Features 00:12:25.293 ================================ 00:12:25.293 Vendor ID: 4e58 00:12:25.293 Subsystem Vendor ID: 4e58 00:12:25.293 Serial Number: SPDK1 00:12:25.293 Model Number: SPDK bdev Controller 00:12:25.293 Firmware Version: 25.01 00:12:25.293 Recommended Arb Burst: 6 00:12:25.293 IEEE OUI Identifier: 8d 6b 50 00:12:25.293 Multi-path I/O 00:12:25.293 May have multiple subsystem ports: Yes 00:12:25.293 May have multiple controllers: Yes 00:12:25.293 Associated with SR-IOV VF: No 00:12:25.293 Max Data Transfer Size: 131072 00:12:25.293 Max Number of Namespaces: 32 00:12:25.293 Max Number of I/O Queues: 127 00:12:25.293 NVMe Specification Version (VS): 1.3 00:12:25.293 NVMe Specification Version (Identify): 1.3 00:12:25.293 Maximum Queue Entries: 256 00:12:25.293 Contiguous Queues Required: Yes 00:12:25.293 Arbitration Mechanisms Supported 00:12:25.293 Weighted Round Robin: Not Supported 00:12:25.293 Vendor Specific: Not Supported 00:12:25.293 Reset Timeout: 15000 ms 00:12:25.293 Doorbell Stride: 4 bytes 00:12:25.293 NVM Subsystem Reset: Not Supported 00:12:25.293 Command Sets Supported 00:12:25.293 NVM Command Set: Supported 00:12:25.293 Boot Partition: Not Supported 00:12:25.293 Memory 
Page Size Minimum: 4096 bytes 00:12:25.293 Memory Page Size Maximum: 4096 bytes 00:12:25.293 Persistent Memory Region: Not Supported 00:12:25.293 Optional Asynchronous Events Supported 00:12:25.293 Namespace Attribute Notices: Supported 00:12:25.293 Firmware Activation Notices: Not Supported 00:12:25.293 ANA Change Notices: Not Supported 00:12:25.293 PLE Aggregate Log Change Notices: Not Supported 00:12:25.293 LBA Status Info Alert Notices: Not Supported 00:12:25.293 EGE Aggregate Log Change Notices: Not Supported 00:12:25.293 Normal NVM Subsystem Shutdown event: Not Supported 00:12:25.293 Zone Descriptor Change Notices: Not Supported 00:12:25.293 Discovery Log Change Notices: Not Supported 00:12:25.293 Controller Attributes 00:12:25.293 128-bit Host Identifier: Supported 00:12:25.293 Non-Operational Permissive Mode: Not Supported 00:12:25.293 NVM Sets: Not Supported 00:12:25.293 Read Recovery Levels: Not Supported 00:12:25.293 Endurance Groups: Not Supported 00:12:25.293 Predictable Latency Mode: Not Supported 00:12:25.293 Traffic Based Keep ALive: Not Supported 00:12:25.293 Namespace Granularity: Not Supported 00:12:25.293 SQ Associations: Not Supported 00:12:25.293 UUID List: Not Supported 00:12:25.293 Multi-Domain Subsystem: Not Supported 00:12:25.293 Fixed Capacity Management: Not Supported 00:12:25.293 Variable Capacity Management: Not Supported 00:12:25.293 Delete Endurance Group: Not Supported 00:12:25.293 Delete NVM Set: Not Supported 00:12:25.293 Extended LBA Formats Supported: Not Supported 00:12:25.293 Flexible Data Placement Supported: Not Supported 00:12:25.293 00:12:25.293 Controller Memory Buffer Support 00:12:25.293 ================================ 00:12:25.293 Supported: No 00:12:25.293 00:12:25.293 Persistent Memory Region Support 00:12:25.293 ================================ 00:12:25.293 Supported: No 00:12:25.293 00:12:25.293 Admin Command Set Attributes 00:12:25.293 ============================ 00:12:25.293 Security Send/Receive: Not Supported 
00:12:25.293 Format NVM: Not Supported 00:12:25.293 Firmware Activate/Download: Not Supported 00:12:25.293 Namespace Management: Not Supported 00:12:25.293 Device Self-Test: Not Supported 00:12:25.293 Directives: Not Supported 00:12:25.293 NVMe-MI: Not Supported 00:12:25.293 Virtualization Management: Not Supported 00:12:25.293 Doorbell Buffer Config: Not Supported 00:12:25.293 Get LBA Status Capability: Not Supported 00:12:25.293 Command & Feature Lockdown Capability: Not Supported 00:12:25.293 Abort Command Limit: 4 00:12:25.293 Async Event Request Limit: 4 00:12:25.293 Number of Firmware Slots: N/A 00:12:25.293 Firmware Slot 1 Read-Only: N/A 00:12:25.293 Firmware Activation Without Reset: N/A 00:12:25.293 Multiple Update Detection Support: N/A 00:12:25.293 Firmware Update Granularity: No Information Provided 00:12:25.293 Per-Namespace SMART Log: No 00:12:25.293 Asymmetric Namespace Access Log Page: Not Supported 00:12:25.293 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:25.293 Command Effects Log Page: Supported 00:12:25.293 Get Log Page Extended Data: Supported 00:12:25.293 Telemetry Log Pages: Not Supported 00:12:25.293 Persistent Event Log Pages: Not Supported 00:12:25.293 Supported Log Pages Log Page: May Support 00:12:25.293 Commands Supported & Effects Log Page: Not Supported 00:12:25.293 Feature Identifiers & Effects Log Page:May Support 00:12:25.293 NVMe-MI Commands & Effects Log Page: May Support 00:12:25.293 Data Area 4 for Telemetry Log: Not Supported 00:12:25.293 Error Log Page Entries Supported: 128 00:12:25.293 Keep Alive: Supported 00:12:25.293 Keep Alive Granularity: 10000 ms 00:12:25.293 00:12:25.293 NVM Command Set Attributes 00:12:25.293 ========================== 00:12:25.293 Submission Queue Entry Size 00:12:25.293 Max: 64 00:12:25.293 Min: 64 00:12:25.293 Completion Queue Entry Size 00:12:25.293 Max: 16 00:12:25.293 Min: 16 00:12:25.293 Number of Namespaces: 32 00:12:25.293 Compare Command: Supported 00:12:25.293 Write Uncorrectable 
Command: Not Supported 00:12:25.293 Dataset Management Command: Supported 00:12:25.293 Write Zeroes Command: Supported 00:12:25.294 Set Features Save Field: Not Supported 00:12:25.294 Reservations: Not Supported 00:12:25.294 Timestamp: Not Supported 00:12:25.294 Copy: Supported 00:12:25.294 Volatile Write Cache: Present 00:12:25.294 Atomic Write Unit (Normal): 1 00:12:25.294 Atomic Write Unit (PFail): 1 00:12:25.294 Atomic Compare & Write Unit: 1 00:12:25.294 Fused Compare & Write: Supported 00:12:25.294 Scatter-Gather List 00:12:25.294 SGL Command Set: Supported (Dword aligned) 00:12:25.294 SGL Keyed: Not Supported 00:12:25.294 SGL Bit Bucket Descriptor: Not Supported 00:12:25.294 SGL Metadata Pointer: Not Supported 00:12:25.294 Oversized SGL: Not Supported 00:12:25.294 SGL Metadata Address: Not Supported 00:12:25.294 SGL Offset: Not Supported 00:12:25.294 Transport SGL Data Block: Not Supported 00:12:25.294 Replay Protected Memory Block: Not Supported 00:12:25.294 00:12:25.294 Firmware Slot Information 00:12:25.294 ========================= 00:12:25.294 Active slot: 1 00:12:25.294 Slot 1 Firmware Revision: 25.01 00:12:25.294 00:12:25.294 00:12:25.294 Commands Supported and Effects 00:12:25.294 ============================== 00:12:25.294 Admin Commands 00:12:25.294 -------------- 00:12:25.294 Get Log Page (02h): Supported 00:12:25.294 Identify (06h): Supported 00:12:25.294 Abort (08h): Supported 00:12:25.294 Set Features (09h): Supported 00:12:25.294 Get Features (0Ah): Supported 00:12:25.294 Asynchronous Event Request (0Ch): Supported 00:12:25.294 Keep Alive (18h): Supported 00:12:25.294 I/O Commands 00:12:25.294 ------------ 00:12:25.294 Flush (00h): Supported LBA-Change 00:12:25.294 Write (01h): Supported LBA-Change 00:12:25.294 Read (02h): Supported 00:12:25.294 Compare (05h): Supported 00:12:25.294 Write Zeroes (08h): Supported LBA-Change 00:12:25.294 Dataset Management (09h): Supported LBA-Change 00:12:25.294 Copy (19h): Supported LBA-Change 00:12:25.294 
00:12:25.294 Error Log 00:12:25.294 ========= 00:12:25.294 00:12:25.294 Arbitration 00:12:25.294 =========== 00:12:25.294 Arbitration Burst: 1 00:12:25.294 00:12:25.294 Power Management 00:12:25.294 ================ 00:12:25.294 Number of Power States: 1 00:12:25.294 Current Power State: Power State #0 00:12:25.294 Power State #0: 00:12:25.294 Max Power: 0.00 W 00:12:25.294 Non-Operational State: Operational 00:12:25.294 Entry Latency: Not Reported 00:12:25.294 Exit Latency: Not Reported 00:12:25.294 Relative Read Throughput: 0 00:12:25.294 Relative Read Latency: 0 00:12:25.294 Relative Write Throughput: 0 00:12:25.294 Relative Write Latency: 0 00:12:25.294 Idle Power: Not Reported 00:12:25.294 Active Power: Not Reported 00:12:25.294 Non-Operational Permissive Mode: Not Supported 00:12:25.294 00:12:25.294 Health Information 00:12:25.294 ================== 00:12:25.294 Critical Warnings: 00:12:25.294 Available Spare Space: OK 00:12:25.294 Temperature: OK 00:12:25.294 Device Reliability: OK 00:12:25.294 Read Only: No 00:12:25.294 Volatile Memory Backup: OK 00:12:25.294 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:25.294 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:25.294 Available Spare: 0% 00:12:25.294 Available Sp[2024-12-09 18:01:48.289771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:25.294 [2024-12-09 18:01:48.289787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:25.294 [2024-12-09 18:01:48.289851] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:25.294 [2024-12-09 18:01:48.289880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.294 [2024-12-09 18:01:48.289891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.294 [2024-12-09 18:01:48.289900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.294 [2024-12-09 18:01:48.289909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.294 [2024-12-09 18:01:48.292559] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:25.294 [2024-12-09 18:01:48.292583] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:25.294 [2024-12-09 18:01:48.293242] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.294 [2024-12-09 18:01:48.293330] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:25.294 [2024-12-09 18:01:48.293344] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:25.294 [2024-12-09 18:01:48.294252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:25.294 [2024-12-09 18:01:48.294276] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:25.294 [2024-12-09 18:01:48.294336] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:25.294 [2024-12-09 18:01:48.296291] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.552 are Threshold: 0% 00:12:25.552 Life Percentage Used: 0% 
00:12:25.552 Data Units Read: 0 00:12:25.552 Data Units Written: 0 00:12:25.552 Host Read Commands: 0 00:12:25.552 Host Write Commands: 0 00:12:25.552 Controller Busy Time: 0 minutes 00:12:25.552 Power Cycles: 0 00:12:25.552 Power On Hours: 0 hours 00:12:25.552 Unsafe Shutdowns: 0 00:12:25.552 Unrecoverable Media Errors: 0 00:12:25.552 Lifetime Error Log Entries: 0 00:12:25.552 Warning Temperature Time: 0 minutes 00:12:25.552 Critical Temperature Time: 0 minutes 00:12:25.552 00:12:25.552 Number of Queues 00:12:25.552 ================ 00:12:25.552 Number of I/O Submission Queues: 127 00:12:25.552 Number of I/O Completion Queues: 127 00:12:25.552 00:12:25.552 Active Namespaces 00:12:25.552 ================= 00:12:25.552 Namespace ID:1 00:12:25.552 Error Recovery Timeout: Unlimited 00:12:25.552 Command Set Identifier: NVM (00h) 00:12:25.552 Deallocate: Supported 00:12:25.552 Deallocated/Unwritten Error: Not Supported 00:12:25.552 Deallocated Read Value: Unknown 00:12:25.552 Deallocate in Write Zeroes: Not Supported 00:12:25.552 Deallocated Guard Field: 0xFFFF 00:12:25.552 Flush: Supported 00:12:25.552 Reservation: Supported 00:12:25.552 Namespace Sharing Capabilities: Multiple Controllers 00:12:25.552 Size (in LBAs): 131072 (0GiB) 00:12:25.552 Capacity (in LBAs): 131072 (0GiB) 00:12:25.552 Utilization (in LBAs): 131072 (0GiB) 00:12:25.552 NGUID: 46D5D539A3F344ABBFB06AC5D2229C76 00:12:25.552 UUID: 46d5d539-a3f3-44ab-bfb0-6ac5d2229c76 00:12:25.552 Thin Provisioning: Not Supported 00:12:25.552 Per-NS Atomic Units: Yes 00:12:25.552 Atomic Boundary Size (Normal): 0 00:12:25.552 Atomic Boundary Size (PFail): 0 00:12:25.552 Atomic Boundary Offset: 0 00:12:25.552 Maximum Single Source Range Length: 65535 00:12:25.552 Maximum Copy Length: 65535 00:12:25.552 Maximum Source Range Count: 1 00:12:25.552 NGUID/EUI64 Never Reused: No 00:12:25.552 Namespace Write Protected: No 00:12:25.552 Number of LBA Formats: 1 00:12:25.552 Current LBA Format: LBA Format #00 00:12:25.552 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:12:25.552 00:12:25.552 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:25.552 [2024-12-09 18:01:48.539439] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:30.816 Initializing NVMe Controllers 00:12:30.816 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.816 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:30.816 Initialization complete. Launching workers. 00:12:30.816 ======================================================== 00:12:30.816 Latency(us) 00:12:30.816 Device Information : IOPS MiB/s Average min max 00:12:30.816 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31644.26 123.61 4044.46 1196.22 7630.44 00:12:30.816 ======================================================== 00:12:30.816 Total : 31644.26 123.61 4044.46 1196.22 7630.44 00:12:30.816 00:12:30.816 [2024-12-09 18:01:53.558820] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.816 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:30.816 [2024-12-09 18:01:53.814043] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.080 Initializing NVMe Controllers 00:12:36.080 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.080 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:36.080 Initialization complete. Launching workers. 00:12:36.080 ======================================================== 00:12:36.080 Latency(us) 00:12:36.080 Device Information : IOPS MiB/s Average min max 00:12:36.080 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15948.80 62.30 8032.51 7500.88 15963.25 00:12:36.080 ======================================================== 00:12:36.080 Total : 15948.80 62.30 8032.51 7500.88 15963.25 00:12:36.080 00:12:36.080 [2024-12-09 18:01:58.849730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.080 18:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:36.080 [2024-12-09 18:01:59.083846] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.351 [2024-12-09 18:02:04.168005] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.351 Initializing NVMe Controllers 00:12:41.351 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.351 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:41.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:41.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:41.351 Initialization complete. 
Launching workers. 00:12:41.351 Starting thread on core 2 00:12:41.351 Starting thread on core 3 00:12:41.351 Starting thread on core 1 00:12:41.351 18:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:41.610 [2024-12-09 18:02:04.503035] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.901 [2024-12-09 18:02:07.573462] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.901 Initializing NVMe Controllers 00:12:44.901 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.901 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:44.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:44.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:44.901 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:44.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:44.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:44.901 Initialization complete. Launching workers. 
00:12:44.901 Starting thread on core 1 with urgent priority queue 00:12:44.901 Starting thread on core 2 with urgent priority queue 00:12:44.901 Starting thread on core 3 with urgent priority queue 00:12:44.901 Starting thread on core 0 with urgent priority queue 00:12:44.901 SPDK bdev Controller (SPDK1 ) core 0: 5387.67 IO/s 18.56 secs/100000 ios 00:12:44.901 SPDK bdev Controller (SPDK1 ) core 1: 4924.00 IO/s 20.31 secs/100000 ios 00:12:44.901 SPDK bdev Controller (SPDK1 ) core 2: 5147.33 IO/s 19.43 secs/100000 ios 00:12:44.901 SPDK bdev Controller (SPDK1 ) core 3: 5510.33 IO/s 18.15 secs/100000 ios 00:12:44.901 ======================================================== 00:12:44.901 00:12:44.901 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:44.901 [2024-12-09 18:02:07.893352] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.901 Initializing NVMe Controllers 00:12:44.901 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.901 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.901 Namespace ID: 1 size: 0GB 00:12:44.901 Initialization complete. 00:12:44.901 INFO: using host memory buffer for IO 00:12:44.901 Hello world! 
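The derived columns in the perf and arbitration tables above are arithmetic consequences of the raw IOPS figures: the MiB/s column is IOPS times the 4 KiB I/O size (`-o 4096`), and the arbitration table's secs/100000 ios is simply 100000 divided by IO/s. A quick sanity check of those relationships (the helper names below are illustrative, not part of SPDK):

```python
# Cross-check of the derived columns in the perf and arbitration tables above.
# MiB/s = IOPS * io_size / 2**20, and secs/100000 ios = 100000 / IO/s.

IO_SIZE = 4096  # bytes, from the -o 4096 argument used in these runs

def mibps(iops: float) -> float:
    """MiB/s implied by an IOPS figure at the 4 KiB I/O size."""
    return round(iops * IO_SIZE / 2**20, 2)

def secs_per_100k(io_per_sec: float) -> float:
    """Seconds per 100000 I/Os implied by an IO/s figure."""
    return round(100_000 / io_per_sec, 2)

print(mibps(31644.26))         # read run:  123.61 MiB/s
print(mibps(15948.80))         # write run: 62.3 MiB/s
print(secs_per_100k(5387.67))  # arbitration core 0: 18.56
```

Both checks reproduce the logged values exactly, confirming these columns carry no information beyond the IOPS figures themselves.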
00:12:44.901 [2024-12-09 18:02:07.926987] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:45.159 18:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:45.417 [2024-12-09 18:02:08.232013] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.351 Initializing NVMe Controllers 00:12:46.351 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.351 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.351 Initialization complete. Launching workers. 00:12:46.351 submit (in ns) avg, min, max = 8139.1, 3524.4, 4017128.9 00:12:46.351 complete (in ns) avg, min, max = 28889.5, 2066.7, 6012107.8 00:12:46.351 00:12:46.351 Submit histogram 00:12:46.351 ================ 00:12:46.351 Range in us Cumulative Count 00:12:46.351 3.508 - 3.532: 0.0650% ( 8) 00:12:46.351 3.532 - 3.556: 0.3249% ( 32) 00:12:46.351 3.556 - 3.579: 0.9261% ( 74) 00:12:46.351 3.579 - 3.603: 2.7295% ( 222) 00:12:46.351 3.603 - 3.627: 6.7262% ( 492) 00:12:46.351 3.627 - 3.650: 13.5581% ( 841) 00:12:46.351 3.650 - 3.674: 21.7953% ( 1014) 00:12:46.351 3.674 - 3.698: 30.3981% ( 1059) 00:12:46.351 3.698 - 3.721: 38.8058% ( 1035) 00:12:46.351 3.721 - 3.745: 46.2063% ( 911) 00:12:46.351 3.745 - 3.769: 51.3566% ( 634) 00:12:46.351 3.769 - 3.793: 56.0845% ( 582) 00:12:46.351 3.793 - 3.816: 59.9106% ( 471) 00:12:46.351 3.816 - 3.840: 64.3217% ( 543) 00:12:46.351 3.840 - 3.864: 67.8716% ( 437) 00:12:46.351 3.864 - 3.887: 71.6897% ( 470) 00:12:46.351 3.887 - 3.911: 75.8976% ( 518) 00:12:46.351 3.911 - 3.935: 79.9756% ( 502) 00:12:46.351 3.935 - 3.959: 83.0382% ( 377) 00:12:46.351 3.959 - 3.982: 85.2640% ( 274) 00:12:46.351 3.982 - 4.006: 87.2136% ( 240) 
00:12:46.351 4.006 - 4.030: 88.8383% ( 200) 00:12:46.351 4.030 - 4.053: 90.2843% ( 178) 00:12:46.351 4.053 - 4.077: 91.4460% ( 143) 00:12:46.351 4.077 - 4.101: 92.4533% ( 124) 00:12:46.351 4.101 - 4.124: 93.3631% ( 112) 00:12:46.351 4.124 - 4.148: 94.1917% ( 102) 00:12:46.351 4.148 - 4.172: 94.9634% ( 95) 00:12:46.351 4.172 - 4.196: 95.3046% ( 42) 00:12:46.351 4.196 - 4.219: 95.6296% ( 40) 00:12:46.351 4.219 - 4.243: 95.7839% ( 19) 00:12:46.351 4.243 - 4.267: 96.0114% ( 28) 00:12:46.351 4.267 - 4.290: 96.1901% ( 22) 00:12:46.351 4.290 - 4.314: 96.3038% ( 14) 00:12:46.351 4.314 - 4.338: 96.5069% ( 25) 00:12:46.351 4.338 - 4.361: 96.7181% ( 26) 00:12:46.351 4.361 - 4.385: 96.7912% ( 9) 00:12:46.351 4.385 - 4.409: 96.8237% ( 4) 00:12:46.351 4.409 - 4.433: 96.8806% ( 7) 00:12:46.351 4.433 - 4.456: 96.9131% ( 4) 00:12:46.351 4.456 - 4.480: 96.9618% ( 6) 00:12:46.351 4.480 - 4.504: 96.9781% ( 2) 00:12:46.351 4.504 - 4.527: 97.0024% ( 3) 00:12:46.351 4.527 - 4.551: 97.0349% ( 4) 00:12:46.351 4.551 - 4.575: 97.0512% ( 2) 00:12:46.351 4.575 - 4.599: 97.0593% ( 1) 00:12:46.351 4.599 - 4.622: 97.0674% ( 1) 00:12:46.351 4.622 - 4.646: 97.0837% ( 2) 00:12:46.351 4.646 - 4.670: 97.0918% ( 1) 00:12:46.351 4.670 - 4.693: 97.1649% ( 9) 00:12:46.351 4.693 - 4.717: 97.2299% ( 8) 00:12:46.351 4.717 - 4.741: 97.2786% ( 6) 00:12:46.351 4.741 - 4.764: 97.3030% ( 3) 00:12:46.351 4.764 - 4.788: 97.3842% ( 10) 00:12:46.351 4.788 - 4.812: 97.4655% ( 10) 00:12:46.351 4.812 - 4.836: 97.5142% ( 6) 00:12:46.351 4.836 - 4.859: 97.5548% ( 5) 00:12:46.351 4.859 - 4.883: 97.6036% ( 6) 00:12:46.351 4.883 - 4.907: 97.6442% ( 5) 00:12:46.351 4.907 - 4.930: 97.7092% ( 8) 00:12:46.351 4.930 - 4.954: 97.7660% ( 7) 00:12:46.351 4.954 - 4.978: 97.7904% ( 3) 00:12:46.351 4.978 - 5.001: 97.8310% ( 5) 00:12:46.351 5.001 - 5.025: 97.8798% ( 6) 00:12:46.351 5.025 - 5.049: 97.9041% ( 3) 00:12:46.351 5.049 - 5.073: 97.9448% ( 5) 00:12:46.351 5.073 - 5.096: 97.9773% ( 4) 00:12:46.351 5.096 - 5.120: 97.9935% ( 2) 
00:12:46.351 5.120 - 5.144: 98.0016% ( 1) 00:12:46.351 5.144 - 5.167: 98.0097% ( 1) 00:12:46.351 5.167 - 5.191: 98.0179% ( 1) 00:12:46.351 5.191 - 5.215: 98.0504% ( 4) 00:12:46.351 5.215 - 5.239: 98.0585% ( 1) 00:12:46.351 5.239 - 5.262: 98.0991% ( 5) 00:12:46.351 5.286 - 5.310: 98.1072% ( 1) 00:12:46.351 5.310 - 5.333: 98.1154% ( 1) 00:12:46.351 5.452 - 5.476: 98.1235% ( 1) 00:12:46.351 5.476 - 5.499: 98.1316% ( 1) 00:12:46.351 5.547 - 5.570: 98.1397% ( 1) 00:12:46.351 5.618 - 5.641: 98.1478% ( 1) 00:12:46.351 5.713 - 5.736: 98.1560% ( 1) 00:12:46.351 5.831 - 5.855: 98.1641% ( 1) 00:12:46.351 5.879 - 5.902: 98.1722% ( 1) 00:12:46.351 5.973 - 5.997: 98.1803% ( 1) 00:12:46.351 6.116 - 6.163: 98.1885% ( 1) 00:12:46.351 6.210 - 6.258: 98.1966% ( 1) 00:12:46.351 6.400 - 6.447: 98.2047% ( 1) 00:12:46.351 6.495 - 6.542: 98.2128% ( 1) 00:12:46.351 6.542 - 6.590: 98.2210% ( 1) 00:12:46.351 6.590 - 6.637: 98.2291% ( 1) 00:12:46.351 6.637 - 6.684: 98.2372% ( 1) 00:12:46.351 6.779 - 6.827: 98.2453% ( 1) 00:12:46.351 6.874 - 6.921: 98.2535% ( 1) 00:12:46.351 6.969 - 7.016: 98.2616% ( 1) 00:12:46.351 7.064 - 7.111: 98.2697% ( 1) 00:12:46.351 7.111 - 7.159: 98.2859% ( 2) 00:12:46.351 7.301 - 7.348: 98.2941% ( 1) 00:12:46.351 7.348 - 7.396: 98.3022% ( 1) 00:12:46.351 7.396 - 7.443: 98.3103% ( 1) 00:12:46.351 7.633 - 7.680: 98.3184% ( 1) 00:12:46.351 7.870 - 7.917: 98.3266% ( 1) 00:12:46.351 7.917 - 7.964: 98.3428% ( 2) 00:12:46.351 7.964 - 8.012: 98.3509% ( 1) 00:12:46.351 8.012 - 8.059: 98.3591% ( 1) 00:12:46.351 8.059 - 8.107: 98.3672% ( 1) 00:12:46.351 8.107 - 8.154: 98.3753% ( 1) 00:12:46.351 8.154 - 8.201: 98.3916% ( 2) 00:12:46.351 8.201 - 8.249: 98.3997% ( 1) 00:12:46.351 8.296 - 8.344: 98.4078% ( 1) 00:12:46.351 8.439 - 8.486: 98.4159% ( 1) 00:12:46.351 8.486 - 8.533: 98.4240% ( 1) 00:12:46.351 8.533 - 8.581: 98.4322% ( 1) 00:12:46.351 8.581 - 8.628: 98.4484% ( 2) 00:12:46.351 8.628 - 8.676: 98.4728% ( 3) 00:12:46.351 8.676 - 8.723: 98.4809% ( 1) 00:12:46.351 8.723 - 
8.770: 98.4890% ( 1) 00:12:46.351 8.770 - 8.818: 98.4972% ( 1) 00:12:46.351 8.818 - 8.865: 98.5053% ( 1) 00:12:46.351 9.150 - 9.197: 98.5134% ( 1) 00:12:46.351 9.197 - 9.244: 98.5297% ( 2) 00:12:46.351 9.244 - 9.292: 98.5459% ( 2) 00:12:46.351 9.339 - 9.387: 98.5621% ( 2) 00:12:46.351 9.387 - 9.434: 98.5703% ( 1) 00:12:46.351 9.434 - 9.481: 98.5784% ( 1) 00:12:46.351 9.624 - 9.671: 98.5865% ( 1) 00:12:46.351 9.861 - 9.908: 98.5946% ( 1) 00:12:46.351 9.956 - 10.003: 98.6028% ( 1) 00:12:46.351 10.050 - 10.098: 98.6109% ( 1) 00:12:46.351 10.193 - 10.240: 98.6190% ( 1) 00:12:46.351 10.287 - 10.335: 98.6353% ( 2) 00:12:46.351 10.382 - 10.430: 98.6434% ( 1) 00:12:46.351 10.572 - 10.619: 98.6596% ( 2) 00:12:46.351 10.619 - 10.667: 98.6759% ( 2) 00:12:46.351 10.667 - 10.714: 98.7002% ( 3) 00:12:46.351 10.951 - 10.999: 98.7084% ( 1) 00:12:46.351 10.999 - 11.046: 98.7165% ( 1) 00:12:46.351 11.093 - 11.141: 98.7246% ( 1) 00:12:46.351 11.283 - 11.330: 98.7327% ( 1) 00:12:46.351 11.567 - 11.615: 98.7571% ( 3) 00:12:46.351 11.662 - 11.710: 98.7734% ( 2) 00:12:46.351 11.757 - 11.804: 98.7815% ( 1) 00:12:46.351 11.899 - 11.947: 98.7896% ( 1) 00:12:46.351 11.994 - 12.041: 98.7977% ( 1) 00:12:46.351 12.136 - 12.231: 98.8058% ( 1) 00:12:46.352 12.326 - 12.421: 98.8140% ( 1) 00:12:46.352 12.800 - 12.895: 98.8221% ( 1) 00:12:46.352 12.895 - 12.990: 98.8383% ( 2) 00:12:46.352 12.990 - 13.084: 98.8465% ( 1) 00:12:46.352 13.084 - 13.179: 98.8546% ( 1) 00:12:46.352 13.179 - 13.274: 98.8708% ( 2) 00:12:46.352 13.274 - 13.369: 98.8790% ( 1) 00:12:46.352 13.369 - 13.464: 98.8871% ( 1) 00:12:46.352 13.559 - 13.653: 98.8952% ( 1) 00:12:46.352 13.748 - 13.843: 98.9033% ( 1) 00:12:46.352 13.938 - 14.033: 98.9277% ( 3) 00:12:46.352 14.412 - 14.507: 98.9358% ( 1) 00:12:46.352 14.696 - 14.791: 98.9521% ( 2) 00:12:46.352 14.981 - 15.076: 98.9602% ( 1) 00:12:46.352 17.161 - 17.256: 98.9764% ( 2) 00:12:46.352 17.351 - 17.446: 99.0171% ( 5) 00:12:46.352 17.446 - 17.541: 99.0739% ( 7) 00:12:46.352 17.541 
- 17.636: 99.1227% ( 6) 00:12:46.352 17.636 - 17.730: 99.1389% ( 2) 00:12:46.352 17.730 - 17.825: 99.1877% ( 6) 00:12:46.352 17.825 - 17.920: 99.2445% ( 7) 00:12:46.352 17.920 - 18.015: 99.2933% ( 6) 00:12:46.352 18.015 - 18.110: 99.3420% ( 6) 00:12:46.352 18.110 - 18.204: 99.4557% ( 14) 00:12:46.352 18.204 - 18.299: 99.5288% ( 9) 00:12:46.352 18.299 - 18.394: 99.5776% ( 6) 00:12:46.352 18.394 - 18.489: 99.6507% ( 9) 00:12:46.352 18.489 - 18.584: 99.6913% ( 5) 00:12:46.352 18.584 - 18.679: 99.7238% ( 4) 00:12:46.352 18.679 - 18.773: 99.7725% ( 6) 00:12:46.352 18.773 - 18.868: 99.8050% ( 4) 00:12:46.352 18.868 - 18.963: 99.8132% ( 1) 00:12:46.352 18.963 - 19.058: 99.8294% ( 2) 00:12:46.352 19.153 - 19.247: 99.8375% ( 1) 00:12:46.352 19.342 - 19.437: 99.8538% ( 2) 00:12:46.352 19.532 - 19.627: 99.8619% ( 1) 00:12:46.352 19.721 - 19.816: 99.8700% ( 1) 00:12:46.352 20.101 - 20.196: 99.8781% ( 1) 00:12:46.352 23.324 - 23.419: 99.8863% ( 1) 00:12:46.352 24.273 - 24.462: 99.8944% ( 1) 00:12:46.352 3009.801 - 3021.938: 99.9025% ( 1) 00:12:46.352 3980.705 - 4004.978: 99.9756% ( 9) 00:12:46.352 4004.978 - 4029.250: 100.0000% ( 3) 00:12:46.352 00:12:46.352 Complete histogram 00:12:46.352 ================== 00:12:46.352 Range in us Cumulative Count 00:12:46.352 2.062 - 2.074: 1.2348% ( 152) 00:12:46.352 2.074 - 2.086: 27.4086% ( 3222) 00:12:46.352 2.086 - 2.098: 35.6864% ( 1019) 00:12:46.352 2.098 - 2.110: 40.4224% ( 583) 00:12:46.352 2.110 - 2.121: 52.5264% ( 1490) 00:12:46.352 2.121 - 2.133: 54.5491% ( 249) 00:12:46.352 2.133 - 2.145: 59.4395% ( 602) 00:12:46.352 2.145 - 2.157: 70.8530% ( 1405) 00:12:46.352 2.157 - 2.169: 72.7457% ( 233) 00:12:46.352 2.169 - 2.181: 76.0195% ( 403) 00:12:46.352 2.181 - 2.193: 79.8538% ( 472) 00:12:46.352 2.193 - 2.204: 80.5443% ( 85) 00:12:46.352 2.204 - 2.216: 82.0634% ( 187) 00:12:46.352 2.216 - 2.228: 86.1332% ( 501) 00:12:46.352 2.228 - 2.240: 87.8798% ( 215) 00:12:46.352 2.240 - 2.252: 90.4549% ( 317) 00:12:46.352 2.252 - 2.264: 92.2015% 
( 215) 00:12:46.352 2.264 - 2.276: 92.5264% ( 40) 00:12:46.352 2.276 - 2.287: 92.8920% ( 45) 00:12:46.352 2.287 - 2.299: 93.1763% ( 35) 00:12:46.352 2.299 - 2.311: 93.6231% ( 55) 00:12:46.352 2.311 - 2.323: 94.4842% ( 106) 00:12:46.352 2.323 - 2.335: 94.6954% ( 26) 00:12:46.352 2.335 - 2.347: 94.7685% ( 9) 00:12:46.352 2.347 - 2.359: 94.8091% ( 5) 00:12:46.352 2.359 - 2.370: 94.9147% ( 13) 00:12:46.352 2.370 - 2.382: 95.0041% ( 11) 00:12:46.352 2.382 - 2.394: 95.2884% ( 35) 00:12:46.352 2.394 - 2.406: 95.6621% ( 46) 00:12:46.352 2.406 - 2.418: 95.8245% ( 20) 00:12:46.352 2.418 - 2.430: 96.0032% ( 22) 00:12:46.352 2.430 - 2.441: 96.1901% ( 23) 00:12:46.352 2.441 - 2.453: 96.3688% ( 22) 00:12:46.352 2.453 - 2.465: 96.5069% ( 17) 00:12:46.352 2.465 - 2.477: 96.7506% ( 30) 00:12:46.352 2.477 - 2.489: 96.8806% ( 16) 00:12:46.352 2.489 - 2.501: 97.1487% ( 33) 00:12:46.352 2.501 - 2.513: 97.3193% ( 21) 00:12:46.352 2.513 - 2.524: 97.4655% ( 18) 00:12:46.352 2.524 - 2.536: 97.6117% ( 18) 00:12:46.352 2.536 - 2.548: 97.7498% ( 17) 00:12:46.352 2.548 - 2.560: 97.8473% ( 12) 00:12:46.352 2.560 - 2.572: 97.8798% ( 4) 00:12:46.352 2.572 - 2.584: 97.9366% ( 7) 00:12:46.352 2.584 - 2.596: 98.0097% ( 9) 00:12:46.352 2.596 - 2.607: 98.0829% ( 9) 00:12:46.352 2.607 - 2.619: 98.0910% ( 1) 00:12:46.352 2.619 - 2.631: 98.1235% ( 4) 00:12:46.352 2.631 - 2.643: 98.1478% ( 3) 00:12:46.352 2.643 - 2.655: 98.1722% ( 3) 00:12:46.352 2.655 - 2.667: 98.1803% ( 1) 00:12:46.352 2.679 - 2.690: 98.2210% ( 5) 00:12:46.352 2.690 - 2.702: 98.2372% ( 2) 00:12:46.352 2.702 - 2.714: 98.2535% ( 2) 00:12:46.352 2.714 - 2.726: 98.2697% ( 2) 00:12:46.352 2.738 - 2.750: 98.2859% ( 2) 00:12:46.352 2.750 - 2.761: 98.3022% ( 2) 00:12:46.352 2.773 - 2.785: 98.3103% ( 1) 00:12:46.352 2.785 - 2.797: 98.3184% ( 1) 00:12:46.352 2.797 - 2.809: 98.3347% ( 2) 00:12:46.352 2.821 - 2.833: 98.3428% ( 1) 00:12:46.352 2.833 - 2.844: 98.3509% ( 1) 00:12:46.352 2.880 - 2.892: 98.3591% ( 1) 00:12:46.352 2.916 - 2.927: 98.3672% 
( 1) 00:12:46.352 2.939 - 2.951: 98.3753% ( 1) [2024-12-09 18:02:09.254212] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.352 2.975 - 2.987: 98.3834% ( 1) 00:12:46.352 3.034 - 3.058: 98.3916% ( 1) 00:12:46.352 3.058 - 3.081: 98.3997% ( 1) 00:12:46.352 3.176 - 3.200: 98.4078% ( 1) 00:12:46.352 3.247 - 3.271: 98.4159% ( 1) 00:12:46.352 3.437 - 3.461: 98.4322% ( 2) 00:12:46.352 3.461 - 3.484: 98.4403% ( 1) 00:12:46.352 3.484 - 3.508: 98.4484% ( 1) 00:12:46.352 3.532 - 3.556: 98.4565% ( 1) 00:12:46.352 3.556 - 3.579: 98.4809% ( 3) 00:12:46.352 3.579 - 3.603: 98.4972% ( 2) 00:12:46.352 3.603 - 3.627: 98.5053% ( 1) 00:12:46.352 3.627 - 3.650: 98.5134% ( 1) 00:12:46.352 3.674 - 3.698: 98.5297% ( 2) 00:12:46.352 3.698 - 3.721: 98.5540% ( 3) 00:12:46.352 3.745 - 3.769: 98.5621% ( 1) 00:12:46.352 3.816 - 3.840: 98.5703% ( 1) 00:12:46.352 3.840 - 3.864: 98.5784% ( 1) 00:12:46.352 3.935 - 3.959: 98.5865% ( 1) 00:12:46.352 3.982 - 4.006: 98.6028% ( 2) 00:12:46.352 4.053 - 4.077: 98.6109% ( 1) 00:12:46.352 4.385 - 4.409: 98.6190% ( 1) 00:12:46.352 5.476 - 5.499: 98.6271% ( 1) 00:12:46.352 5.784 - 5.807: 98.6353% ( 1) 00:12:46.352 5.950 - 5.973: 98.6434% ( 1) 00:12:46.352 6.163 - 6.210: 98.6515% ( 1) 00:12:46.352 6.447 - 6.495: 98.6596% ( 1) 00:12:46.352 7.301 - 7.348: 98.6677% ( 1) 00:12:46.352 7.348 - 7.396: 98.6759% ( 1) 00:12:46.352 7.680 - 7.727: 98.6840% ( 1) 00:12:46.352 7.964 - 8.012: 98.6921% ( 1) 00:12:46.352 8.249 - 8.296: 98.7002% ( 1) 00:12:46.352 8.533 - 8.581: 98.7084% ( 1) 00:12:46.352 8.865 - 8.913: 98.7165% ( 1) 00:12:46.352 9.055 - 9.102: 98.7246% ( 1) 00:12:46.352 10.145 - 10.193: 98.7327% ( 1) 00:12:46.352 10.904 - 10.951: 98.7409% ( 1) 00:12:46.352 15.360 - 15.455: 98.7490% ( 1) 00:12:46.352 15.644 - 15.739: 98.7734% ( 3) 00:12:46.352 15.739 - 15.834: 98.7977% ( 3) 00:12:46.352 15.834 - 15.929: 98.8221% ( 3) 00:12:46.352 15.929 - 16.024: 98.8465% ( 3) 00:12:46.352 16.024 - 16.119:
98.8871% ( 5) 00:12:46.352 16.119 - 16.213: 98.9033% ( 2) 00:12:46.352 16.213 - 16.308: 98.9521% ( 6) 00:12:46.352 16.308 - 16.403: 98.9683% ( 2) 00:12:46.352 16.403 - 16.498: 99.0171% ( 6) 00:12:46.352 16.498 - 16.593: 99.0658% ( 6) 00:12:46.352 16.593 - 16.687: 99.1227% ( 7) 00:12:46.352 16.687 - 16.782: 99.1633% ( 5) 00:12:46.352 16.782 - 16.877: 99.1877% ( 3) 00:12:46.352 16.877 - 16.972: 99.2120% ( 3) 00:12:46.352 17.067 - 17.161: 99.2283% ( 2) 00:12:46.352 17.161 - 17.256: 99.2608% ( 4) 00:12:46.352 17.351 - 17.446: 99.2689% ( 1) 00:12:46.352 17.446 - 17.541: 99.2770% ( 1) 00:12:46.352 17.541 - 17.636: 99.2933% ( 2) 00:12:46.352 17.636 - 17.730: 99.3014% ( 1) 00:12:46.352 17.825 - 17.920: 99.3176% ( 2) 00:12:46.352 18.679 - 18.773: 99.3258% ( 1) 00:12:46.352 25.410 - 25.600: 99.3339% ( 1) 00:12:46.352 2026.761 - 2038.898: 99.3420% ( 1) 00:12:46.352 3883.615 - 3907.887: 99.3501% ( 1) 00:12:46.352 3980.705 - 4004.978: 99.8457% ( 61) 00:12:46.352 4004.978 - 4029.250: 99.9919% ( 18) 00:12:46.352 5995.330 - 6019.603: 100.0000% ( 1) 00:12:46.352 00:12:46.352 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:46.352 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:46.352 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:46.352 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:46.352 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:46.611 [ 00:12:46.611 { 00:12:46.611 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.611 "subtype": "Discovery", 00:12:46.611 
"listen_addresses": [], 00:12:46.611 "allow_any_host": true, 00:12:46.611 "hosts": [] 00:12:46.611 }, 00:12:46.611 { 00:12:46.611 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:46.611 "subtype": "NVMe", 00:12:46.611 "listen_addresses": [ 00:12:46.611 { 00:12:46.611 "trtype": "VFIOUSER", 00:12:46.611 "adrfam": "IPv4", 00:12:46.611 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:46.611 "trsvcid": "0" 00:12:46.611 } 00:12:46.611 ], 00:12:46.611 "allow_any_host": true, 00:12:46.611 "hosts": [], 00:12:46.611 "serial_number": "SPDK1", 00:12:46.611 "model_number": "SPDK bdev Controller", 00:12:46.611 "max_namespaces": 32, 00:12:46.611 "min_cntlid": 1, 00:12:46.611 "max_cntlid": 65519, 00:12:46.611 "namespaces": [ 00:12:46.611 { 00:12:46.611 "nsid": 1, 00:12:46.611 "bdev_name": "Malloc1", 00:12:46.611 "name": "Malloc1", 00:12:46.611 "nguid": "46D5D539A3F344ABBFB06AC5D2229C76", 00:12:46.611 "uuid": "46d5d539-a3f3-44ab-bfb0-6ac5d2229c76" 00:12:46.611 } 00:12:46.611 ] 00:12:46.611 }, 00:12:46.611 { 00:12:46.611 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:46.611 "subtype": "NVMe", 00:12:46.611 "listen_addresses": [ 00:12:46.611 { 00:12:46.611 "trtype": "VFIOUSER", 00:12:46.611 "adrfam": "IPv4", 00:12:46.611 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:46.611 "trsvcid": "0" 00:12:46.611 } 00:12:46.611 ], 00:12:46.611 "allow_any_host": true, 00:12:46.611 "hosts": [], 00:12:46.611 "serial_number": "SPDK2", 00:12:46.611 "model_number": "SPDK bdev Controller", 00:12:46.611 "max_namespaces": 32, 00:12:46.611 "min_cntlid": 1, 00:12:46.611 "max_cntlid": 65519, 00:12:46.611 "namespaces": [ 00:12:46.611 { 00:12:46.611 "nsid": 1, 00:12:46.611 "bdev_name": "Malloc2", 00:12:46.611 "name": "Malloc2", 00:12:46.611 "nguid": "D09A7CF0299543A08D9F1A059D93AD76", 00:12:46.611 "uuid": "d09a7cf0-2995-43a0-8d9f-1a059d93ad76" 00:12:46.611 } 00:12:46.611 ] 00:12:46.611 } 00:12:46.611 ] 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1445565 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:46.611 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:46.869 [2024-12-09 18:02:09.750006] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.869 Malloc3 00:12:46.869 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:47.127 [2024-12-09 18:02:10.139870] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:47.127 18:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:47.385 Asynchronous Event Request test 00:12:47.385 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.385 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.385 Registering asynchronous event callbacks... 00:12:47.385 Starting namespace attribute notice tests for all controllers... 00:12:47.385 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:47.385 aer_cb - Changed Namespace 00:12:47.385 Cleaning up... 00:12:47.385 [ 00:12:47.385 { 00:12:47.385 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.385 "subtype": "Discovery", 00:12:47.385 "listen_addresses": [], 00:12:47.385 "allow_any_host": true, 00:12:47.385 "hosts": [] 00:12:47.385 }, 00:12:47.385 { 00:12:47.385 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:47.385 "subtype": "NVMe", 00:12:47.385 "listen_addresses": [ 00:12:47.385 { 00:12:47.385 "trtype": "VFIOUSER", 00:12:47.385 "adrfam": "IPv4", 00:12:47.385 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:47.385 "trsvcid": "0" 00:12:47.385 } 00:12:47.385 ], 00:12:47.385 "allow_any_host": true, 00:12:47.385 "hosts": [], 00:12:47.385 "serial_number": "SPDK1", 00:12:47.385 "model_number": "SPDK bdev Controller", 00:12:47.385 "max_namespaces": 32, 00:12:47.385 "min_cntlid": 1, 00:12:47.385 "max_cntlid": 65519, 00:12:47.385 "namespaces": [ 00:12:47.385 { 00:12:47.385 "nsid": 1, 00:12:47.385 "bdev_name": "Malloc1", 00:12:47.385 "name": "Malloc1", 00:12:47.385 "nguid": "46D5D539A3F344ABBFB06AC5D2229C76", 00:12:47.385 "uuid": "46d5d539-a3f3-44ab-bfb0-6ac5d2229c76" 00:12:47.385 }, 00:12:47.385 { 00:12:47.385 "nsid": 2, 00:12:47.385 "bdev_name": "Malloc3", 00:12:47.385 "name": "Malloc3", 00:12:47.385 "nguid": "EA09C69C3C8C4CCC8DE82DF570283721", 00:12:47.385 "uuid": "ea09c69c-3c8c-4ccc-8de8-2df570283721" 
00:12:47.385 } 00:12:47.385 ] 00:12:47.385 }, 00:12:47.385 { 00:12:47.385 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:47.385 "subtype": "NVMe", 00:12:47.385 "listen_addresses": [ 00:12:47.385 { 00:12:47.385 "trtype": "VFIOUSER", 00:12:47.385 "adrfam": "IPv4", 00:12:47.385 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:47.385 "trsvcid": "0" 00:12:47.385 } 00:12:47.385 ], 00:12:47.385 "allow_any_host": true, 00:12:47.385 "hosts": [], 00:12:47.385 "serial_number": "SPDK2", 00:12:47.385 "model_number": "SPDK bdev Controller", 00:12:47.385 "max_namespaces": 32, 00:12:47.385 "min_cntlid": 1, 00:12:47.385 "max_cntlid": 65519, 00:12:47.385 "namespaces": [ 00:12:47.385 { 00:12:47.385 "nsid": 1, 00:12:47.385 "bdev_name": "Malloc2", 00:12:47.385 "name": "Malloc2", 00:12:47.385 "nguid": "D09A7CF0299543A08D9F1A059D93AD76", 00:12:47.385 "uuid": "d09a7cf0-2995-43a0-8d9f-1a059d93ad76" 00:12:47.385 } 00:12:47.385 ] 00:12:47.385 } 00:12:47.385 ] 00:12:47.645 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1445565 00:12:47.645 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:47.645 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:47.645 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:47.645 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:47.645 [2024-12-09 18:02:10.463150] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:12:47.645 [2024-12-09 18:02:10.463205] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445700 ] 00:12:47.645 [2024-12-09 18:02:10.515361] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:47.645 [2024-12-09 18:02:10.517719] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:47.645 [2024-12-09 18:02:10.517754] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f88bd035000 00:12:47.645 [2024-12-09 18:02:10.518722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.519732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.520732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.521734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.526555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.526785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.527799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.645 
[2024-12-09 18:02:10.528799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.645 [2024-12-09 18:02:10.529811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:47.645 [2024-12-09 18:02:10.529849] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f88bd02a000 00:12:47.645 [2024-12-09 18:02:10.531059] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.645 [2024-12-09 18:02:10.546227] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:47.645 [2024-12-09 18:02:10.546267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:47.645 [2024-12-09 18:02:10.548367] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:47.645 [2024-12-09 18:02:10.548421] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:47.645 [2024-12-09 18:02:10.548514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:47.645 [2024-12-09 18:02:10.548563] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:47.645 [2024-12-09 18:02:10.548577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:47.645 [2024-12-09 18:02:10.549368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:47.645 [2024-12-09 18:02:10.549390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:47.645 [2024-12-09 18:02:10.549403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:47.645 [2024-12-09 18:02:10.550379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:47.645 [2024-12-09 18:02:10.550402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:47.646 [2024-12-09 18:02:10.550418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:47.646 [2024-12-09 18:02:10.551382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:47.646 [2024-12-09 18:02:10.551403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:47.646 [2024-12-09 18:02:10.552386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:47.646 [2024-12-09 18:02:10.552407] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:47.646 [2024-12-09 18:02:10.552417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:47.646 [2024-12-09 18:02:10.552428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:47.646 [2024-12-09 18:02:10.552539] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:47.646 [2024-12-09 18:02:10.552568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:47.646 [2024-12-09 18:02:10.552578] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:47.646 [2024-12-09 18:02:10.553389] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:47.646 [2024-12-09 18:02:10.554412] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:47.646 [2024-12-09 18:02:10.555413] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:47.646 [2024-12-09 18:02:10.556402] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.646 [2024-12-09 18:02:10.556478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:47.646 [2024-12-09 18:02:10.557428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:47.646 [2024-12-09 18:02:10.557450] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:47.646 [2024-12-09 18:02:10.557460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.557486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:47.646 [2024-12-09 18:02:10.557505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.557535] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.646 [2024-12-09 18:02:10.557554] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.646 [2024-12-09 18:02:10.557562] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.646 [2024-12-09 18:02:10.557585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.565563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.565591] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:47.646 [2024-12-09 18:02:10.565605] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:47.646 [2024-12-09 18:02:10.565614] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:47.646 [2024-12-09 18:02:10.565623] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:47.646 [2024-12-09 18:02:10.565631] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:47.646 [2024-12-09 18:02:10.565639] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:47.646 [2024-12-09 18:02:10.565648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.565662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.565680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.573554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.573601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.646 [2024-12-09 18:02:10.573616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.646 [2024-12-09 18:02:10.573628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.646 [2024-12-09 18:02:10.573646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.646 [2024-12-09 18:02:10.573657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.573675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.573691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.581559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.581579] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:47.646 [2024-12-09 18:02:10.581589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.581601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.581613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.581627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.589559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.589652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.589671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:47.646 
[2024-12-09 18:02:10.589686] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:47.646 [2024-12-09 18:02:10.589695] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:47.646 [2024-12-09 18:02:10.589701] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.646 [2024-12-09 18:02:10.589711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.597554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.597580] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:47.646 [2024-12-09 18:02:10.597618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.597636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.597650] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.646 [2024-12-09 18:02:10.597659] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.646 [2024-12-09 18:02:10.597665] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.646 [2024-12-09 18:02:10.597675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.605555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.605586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.605603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.605633] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.646 [2024-12-09 18:02:10.605642] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.646 [2024-12-09 18:02:10.605649] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.646 [2024-12-09 18:02:10.605659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.646 [2024-12-09 18:02:10.613555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:47.646 [2024-12-09 18:02:10.613578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:47.646 [2024-12-09 18:02:10.613592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:47.647 [2024-12-09 18:02:10.613608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:47.647 [2024-12-09 18:02:10.613623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:12:47.647 [2024-12-09 18:02:10.613633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:47.647 [2024-12-09 18:02:10.613642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:47.647 [2024-12-09 18:02:10.613651] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:47.647 [2024-12-09 18:02:10.613659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:47.647 [2024-12-09 18:02:10.613668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:47.647 [2024-12-09 18:02:10.613699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.621555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 18:02:10.621582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.629572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 18:02:10.629598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.637558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 
18:02:10.637583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.645559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 18:02:10.645597] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:47.647 [2024-12-09 18:02:10.645610] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:47.647 [2024-12-09 18:02:10.645616] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:47.647 [2024-12-09 18:02:10.645623] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:47.647 [2024-12-09 18:02:10.645629] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:47.647 [2024-12-09 18:02:10.645639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:47.647 [2024-12-09 18:02:10.645651] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:47.647 [2024-12-09 18:02:10.645660] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:47.647 [2024-12-09 18:02:10.645666] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.647 [2024-12-09 18:02:10.645676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.645687] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:47.647 [2024-12-09 18:02:10.645696] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.647 [2024-12-09 18:02:10.645702] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.647 [2024-12-09 18:02:10.645711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.645724] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:47.647 [2024-12-09 18:02:10.645733] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:47.647 [2024-12-09 18:02:10.645739] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.647 [2024-12-09 18:02:10.645748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:47.647 [2024-12-09 18:02:10.653558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 18:02:10.653586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 18:02:10.653604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:47.647 [2024-12-09 18:02:10.653617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:47.647 ===================================================== 00:12:47.647 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:47.647 ===================================================== 00:12:47.647 Controller Capabilities/Features 00:12:47.647 
================================ 00:12:47.647 Vendor ID: 4e58 00:12:47.647 Subsystem Vendor ID: 4e58 00:12:47.647 Serial Number: SPDK2 00:12:47.647 Model Number: SPDK bdev Controller 00:12:47.647 Firmware Version: 25.01 00:12:47.647 Recommended Arb Burst: 6 00:12:47.647 IEEE OUI Identifier: 8d 6b 50 00:12:47.647 Multi-path I/O 00:12:47.647 May have multiple subsystem ports: Yes 00:12:47.647 May have multiple controllers: Yes 00:12:47.647 Associated with SR-IOV VF: No 00:12:47.647 Max Data Transfer Size: 131072 00:12:47.647 Max Number of Namespaces: 32 00:12:47.647 Max Number of I/O Queues: 127 00:12:47.647 NVMe Specification Version (VS): 1.3 00:12:47.647 NVMe Specification Version (Identify): 1.3 00:12:47.647 Maximum Queue Entries: 256 00:12:47.647 Contiguous Queues Required: Yes 00:12:47.647 Arbitration Mechanisms Supported 00:12:47.647 Weighted Round Robin: Not Supported 00:12:47.647 Vendor Specific: Not Supported 00:12:47.647 Reset Timeout: 15000 ms 00:12:47.647 Doorbell Stride: 4 bytes 00:12:47.647 NVM Subsystem Reset: Not Supported 00:12:47.647 Command Sets Supported 00:12:47.647 NVM Command Set: Supported 00:12:47.647 Boot Partition: Not Supported 00:12:47.647 Memory Page Size Minimum: 4096 bytes 00:12:47.647 Memory Page Size Maximum: 4096 bytes 00:12:47.647 Persistent Memory Region: Not Supported 00:12:47.647 Optional Asynchronous Events Supported 00:12:47.647 Namespace Attribute Notices: Supported 00:12:47.647 Firmware Activation Notices: Not Supported 00:12:47.647 ANA Change Notices: Not Supported 00:12:47.647 PLE Aggregate Log Change Notices: Not Supported 00:12:47.647 LBA Status Info Alert Notices: Not Supported 00:12:47.647 EGE Aggregate Log Change Notices: Not Supported 00:12:47.647 Normal NVM Subsystem Shutdown event: Not Supported 00:12:47.647 Zone Descriptor Change Notices: Not Supported 00:12:47.647 Discovery Log Change Notices: Not Supported 00:12:47.647 Controller Attributes 00:12:47.647 128-bit Host Identifier: Supported 00:12:47.647 
Non-Operational Permissive Mode: Not Supported 00:12:47.647 NVM Sets: Not Supported 00:12:47.647 Read Recovery Levels: Not Supported 00:12:47.647 Endurance Groups: Not Supported 00:12:47.647 Predictable Latency Mode: Not Supported 00:12:47.647 Traffic Based Keep ALive: Not Supported 00:12:47.647 Namespace Granularity: Not Supported 00:12:47.647 SQ Associations: Not Supported 00:12:47.647 UUID List: Not Supported 00:12:47.647 Multi-Domain Subsystem: Not Supported 00:12:47.647 Fixed Capacity Management: Not Supported 00:12:47.647 Variable Capacity Management: Not Supported 00:12:47.647 Delete Endurance Group: Not Supported 00:12:47.647 Delete NVM Set: Not Supported 00:12:47.647 Extended LBA Formats Supported: Not Supported 00:12:47.647 Flexible Data Placement Supported: Not Supported 00:12:47.647 00:12:47.647 Controller Memory Buffer Support 00:12:47.647 ================================ 00:12:47.647 Supported: No 00:12:47.647 00:12:47.647 Persistent Memory Region Support 00:12:47.647 ================================ 00:12:47.647 Supported: No 00:12:47.647 00:12:47.647 Admin Command Set Attributes 00:12:47.647 ============================ 00:12:47.647 Security Send/Receive: Not Supported 00:12:47.647 Format NVM: Not Supported 00:12:47.647 Firmware Activate/Download: Not Supported 00:12:47.647 Namespace Management: Not Supported 00:12:47.647 Device Self-Test: Not Supported 00:12:47.647 Directives: Not Supported 00:12:47.647 NVMe-MI: Not Supported 00:12:47.647 Virtualization Management: Not Supported 00:12:47.647 Doorbell Buffer Config: Not Supported 00:12:47.647 Get LBA Status Capability: Not Supported 00:12:47.647 Command & Feature Lockdown Capability: Not Supported 00:12:47.647 Abort Command Limit: 4 00:12:47.647 Async Event Request Limit: 4 00:12:47.647 Number of Firmware Slots: N/A 00:12:47.647 Firmware Slot 1 Read-Only: N/A 00:12:47.647 Firmware Activation Without Reset: N/A 00:12:47.647 Multiple Update Detection Support: N/A 00:12:47.647 Firmware Update 
Granularity: No Information Provided 00:12:47.647 Per-Namespace SMART Log: No 00:12:47.647 Asymmetric Namespace Access Log Page: Not Supported 00:12:47.647 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:47.647 Command Effects Log Page: Supported 00:12:47.647 Get Log Page Extended Data: Supported 00:12:47.647 Telemetry Log Pages: Not Supported 00:12:47.647 Persistent Event Log Pages: Not Supported 00:12:47.647 Supported Log Pages Log Page: May Support 00:12:47.647 Commands Supported & Effects Log Page: Not Supported 00:12:47.647 Feature Identifiers & Effects Log Page:May Support 00:12:47.648 NVMe-MI Commands & Effects Log Page: May Support 00:12:47.648 Data Area 4 for Telemetry Log: Not Supported 00:12:47.648 Error Log Page Entries Supported: 128 00:12:47.648 Keep Alive: Supported 00:12:47.648 Keep Alive Granularity: 10000 ms 00:12:47.648 00:12:47.648 NVM Command Set Attributes 00:12:47.648 ========================== 00:12:47.648 Submission Queue Entry Size 00:12:47.648 Max: 64 00:12:47.648 Min: 64 00:12:47.648 Completion Queue Entry Size 00:12:47.648 Max: 16 00:12:47.648 Min: 16 00:12:47.648 Number of Namespaces: 32 00:12:47.648 Compare Command: Supported 00:12:47.648 Write Uncorrectable Command: Not Supported 00:12:47.648 Dataset Management Command: Supported 00:12:47.648 Write Zeroes Command: Supported 00:12:47.648 Set Features Save Field: Not Supported 00:12:47.648 Reservations: Not Supported 00:12:47.648 Timestamp: Not Supported 00:12:47.648 Copy: Supported 00:12:47.648 Volatile Write Cache: Present 00:12:47.648 Atomic Write Unit (Normal): 1 00:12:47.648 Atomic Write Unit (PFail): 1 00:12:47.648 Atomic Compare & Write Unit: 1 00:12:47.648 Fused Compare & Write: Supported 00:12:47.648 Scatter-Gather List 00:12:47.648 SGL Command Set: Supported (Dword aligned) 00:12:47.648 SGL Keyed: Not Supported 00:12:47.648 SGL Bit Bucket Descriptor: Not Supported 00:12:47.648 SGL Metadata Pointer: Not Supported 00:12:47.648 Oversized SGL: Not Supported 00:12:47.648 SGL 
Metadata Address: Not Supported 00:12:47.648 SGL Offset: Not Supported 00:12:47.648 Transport SGL Data Block: Not Supported 00:12:47.648 Replay Protected Memory Block: Not Supported 00:12:47.648 00:12:47.648 Firmware Slot Information 00:12:47.648 ========================= 00:12:47.648 Active slot: 1 00:12:47.648 Slot 1 Firmware Revision: 25.01 00:12:47.648 00:12:47.648 00:12:47.648 Commands Supported and Effects 00:12:47.648 ============================== 00:12:47.648 Admin Commands 00:12:47.648 -------------- 00:12:47.648 Get Log Page (02h): Supported 00:12:47.648 Identify (06h): Supported 00:12:47.648 Abort (08h): Supported 00:12:47.648 Set Features (09h): Supported 00:12:47.648 Get Features (0Ah): Supported 00:12:47.648 Asynchronous Event Request (0Ch): Supported 00:12:47.648 Keep Alive (18h): Supported 00:12:47.648 I/O Commands 00:12:47.648 ------------ 00:12:47.648 Flush (00h): Supported LBA-Change 00:12:47.648 Write (01h): Supported LBA-Change 00:12:47.648 Read (02h): Supported 00:12:47.648 Compare (05h): Supported 00:12:47.648 Write Zeroes (08h): Supported LBA-Change 00:12:47.648 Dataset Management (09h): Supported LBA-Change 00:12:47.648 Copy (19h): Supported LBA-Change 00:12:47.648 00:12:47.648 Error Log 00:12:47.648 ========= 00:12:47.648 00:12:47.648 Arbitration 00:12:47.648 =========== 00:12:47.648 Arbitration Burst: 1 00:12:47.648 00:12:47.648 Power Management 00:12:47.648 ================ 00:12:47.648 Number of Power States: 1 00:12:47.648 Current Power State: Power State #0 00:12:47.648 Power State #0: 00:12:47.648 Max Power: 0.00 W 00:12:47.648 Non-Operational State: Operational 00:12:47.648 Entry Latency: Not Reported 00:12:47.648 Exit Latency: Not Reported 00:12:47.648 Relative Read Throughput: 0 00:12:47.648 Relative Read Latency: 0 00:12:47.648 Relative Write Throughput: 0 00:12:47.648 Relative Write Latency: 0 00:12:47.648 Idle Power: Not Reported 00:12:47.648 Active Power: Not Reported 00:12:47.648 Non-Operational Permissive Mode: Not 
Supported 00:12:47.648 00:12:47.648 Health Information 00:12:47.648 ================== 00:12:47.648 Critical Warnings: 00:12:47.648 Available Spare Space: OK 00:12:47.648 Temperature: OK 00:12:47.648 Device Reliability: OK 00:12:47.648 Read Only: No 00:12:47.648 Volatile Memory Backup: OK 00:12:47.648 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:47.648 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:47.648 Available Spare: 0% 00:12:47.648 Available Sp[2024-12-09 18:02:10.653745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:47.648 [2024-12-09 18:02:10.661572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:47.648 [2024-12-09 18:02:10.661628] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:47.648 [2024-12-09 18:02:10.661646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.648 [2024-12-09 18:02:10.661658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.648 [2024-12-09 18:02:10.661668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.648 [2024-12-09 18:02:10.661678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.648 [2024-12-09 18:02:10.661769] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:47.648 [2024-12-09 18:02:10.661791] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:47.648 
[2024-12-09 18:02:10.662769] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.648 [2024-12-09 18:02:10.662845] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:47.648 [2024-12-09 18:02:10.662874] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:47.648 [2024-12-09 18:02:10.663777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:47.648 [2024-12-09 18:02:10.663803] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:47.648 [2024-12-09 18:02:10.663860] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:47.648 [2024-12-09 18:02:10.665120] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.958 are Threshold: 0% 00:12:47.958 Life Percentage Used: 0% 00:12:47.958 Data Units Read: 0 00:12:47.958 Data Units Written: 0 00:12:47.958 Host Read Commands: 0 00:12:47.958 Host Write Commands: 0 00:12:47.958 Controller Busy Time: 0 minutes 00:12:47.958 Power Cycles: 0 00:12:47.958 Power On Hours: 0 hours 00:12:47.958 Unsafe Shutdowns: 0 00:12:47.958 Unrecoverable Media Errors: 0 00:12:47.958 Lifetime Error Log Entries: 0 00:12:47.958 Warning Temperature Time: 0 minutes 00:12:47.958 Critical Temperature Time: 0 minutes 00:12:47.958 00:12:47.958 Number of Queues 00:12:47.958 ================ 00:12:47.958 Number of I/O Submission Queues: 127 00:12:47.958 Number of I/O Completion Queues: 127 00:12:47.958 00:12:47.958 Active Namespaces 00:12:47.958 ================= 00:12:47.958 Namespace ID:1 00:12:47.958 Error Recovery Timeout: Unlimited 
00:12:47.958 Command Set Identifier: NVM (00h) 00:12:47.958 Deallocate: Supported 00:12:47.958 Deallocated/Unwritten Error: Not Supported 00:12:47.958 Deallocated Read Value: Unknown 00:12:47.958 Deallocate in Write Zeroes: Not Supported 00:12:47.958 Deallocated Guard Field: 0xFFFF 00:12:47.958 Flush: Supported 00:12:47.958 Reservation: Supported 00:12:47.958 Namespace Sharing Capabilities: Multiple Controllers 00:12:47.958 Size (in LBAs): 131072 (0GiB) 00:12:47.958 Capacity (in LBAs): 131072 (0GiB) 00:12:47.958 Utilization (in LBAs): 131072 (0GiB) 00:12:47.958 NGUID: D09A7CF0299543A08D9F1A059D93AD76 00:12:47.958 UUID: d09a7cf0-2995-43a0-8d9f-1a059d93ad76 00:12:47.958 Thin Provisioning: Not Supported 00:12:47.958 Per-NS Atomic Units: Yes 00:12:47.958 Atomic Boundary Size (Normal): 0 00:12:47.958 Atomic Boundary Size (PFail): 0 00:12:47.958 Atomic Boundary Offset: 0 00:12:47.958 Maximum Single Source Range Length: 65535 00:12:47.958 Maximum Copy Length: 65535 00:12:47.958 Maximum Source Range Count: 1 00:12:47.958 NGUID/EUI64 Never Reused: No 00:12:47.958 Namespace Write Protected: No 00:12:47.958 Number of LBA Formats: 1 00:12:47.958 Current LBA Format: LBA Format #00 00:12:47.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:47.958 00:12:47.958 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:47.958 [2024-12-09 18:02:10.907378] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:53.222 Initializing NVMe Controllers 00:12:53.222 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:53.222 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:12:53.222 Initialization complete. Launching workers. 00:12:53.222 ======================================================== 00:12:53.223 Latency(us) 00:12:53.223 Device Information : IOPS MiB/s Average min max 00:12:53.223 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31994.26 124.98 4000.00 1209.38 7483.56 00:12:53.223 ======================================================== 00:12:53.223 Total : 31994.26 124.98 4000.00 1209.38 7483.56 00:12:53.223 00:12:53.223 [2024-12-09 18:02:16.016947] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:53.223 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:53.481 [2024-12-09 18:02:16.275604] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.743 Initializing NVMe Controllers 00:12:58.744 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:58.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:58.744 Initialization complete. Launching workers. 
00:12:58.744 ======================================================== 00:12:58.744 Latency(us) 00:12:58.744 Device Information : IOPS MiB/s Average min max 00:12:58.744 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30237.59 118.12 4233.28 1209.99 9783.90 00:12:58.744 ======================================================== 00:12:58.744 Total : 30237.59 118.12 4233.28 1209.99 9783.90 00:12:58.744 00:12:58.744 [2024-12-09 18:02:21.298111] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.744 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:58.744 [2024-12-09 18:02:21.540161] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.106 [2024-12-09 18:02:26.674704] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.106 Initializing NVMe Controllers 00:13:04.106 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:04.106 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:04.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:04.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:04.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:04.106 Initialization complete. Launching workers. 
00:13:04.106 Starting thread on core 2 00:13:04.106 Starting thread on core 3 00:13:04.106 Starting thread on core 1 00:13:04.106 18:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:04.106 [2024-12-09 18:02:27.008025] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:07.390 [2024-12-09 18:02:30.203804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:07.390 Initializing NVMe Controllers 00:13:07.390 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.390 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.390 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:07.390 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:07.391 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:07.391 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:07.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:07.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:07.391 Initialization complete. Launching workers. 
00:13:07.391 Starting thread on core 1 with urgent priority queue 00:13:07.391 Starting thread on core 2 with urgent priority queue 00:13:07.391 Starting thread on core 3 with urgent priority queue 00:13:07.391 Starting thread on core 0 with urgent priority queue 00:13:07.391 SPDK bdev Controller (SPDK2 ) core 0: 5618.67 IO/s 17.80 secs/100000 ios 00:13:07.391 SPDK bdev Controller (SPDK2 ) core 1: 5360.00 IO/s 18.66 secs/100000 ios 00:13:07.391 SPDK bdev Controller (SPDK2 ) core 2: 5612.00 IO/s 17.82 secs/100000 ios 00:13:07.391 SPDK bdev Controller (SPDK2 ) core 3: 6072.67 IO/s 16.47 secs/100000 ios 00:13:07.391 ======================================================== 00:13:07.391 00:13:07.391 18:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:07.648 [2024-12-09 18:02:30.529104] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:07.648 Initializing NVMe Controllers 00:13:07.648 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.648 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.648 Namespace ID: 1 size: 0GB 00:13:07.648 Initialization complete. 00:13:07.648 INFO: using host memory buffer for IO 00:13:07.648 Hello world! 
00:13:07.648 [2024-12-09 18:02:30.539310] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:07.648 18:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:07.905 [2024-12-09 18:02:30.843576] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.278 Initializing NVMe Controllers 00:13:09.278 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.278 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.278 Initialization complete. Launching workers. 00:13:09.278 submit (in ns) avg, min, max = 6228.0, 3535.6, 4017405.6 00:13:09.278 complete (in ns) avg, min, max = 30331.6, 2060.0, 5996260.0 00:13:09.278 00:13:09.278 Submit histogram 00:13:09.278 ================ 00:13:09.278 Range in us Cumulative Count 00:13:09.278 3.532 - 3.556: 0.1215% ( 15) 00:13:09.278 3.556 - 3.579: 0.7857% ( 82) 00:13:09.278 3.579 - 3.603: 2.6488% ( 230) 00:13:09.278 3.603 - 3.627: 6.5695% ( 484) 00:13:09.278 3.627 - 3.650: 13.6087% ( 869) 00:13:09.278 3.650 - 3.674: 22.6488% ( 1116) 00:13:09.278 3.674 - 3.698: 32.8392% ( 1258) 00:13:09.278 3.698 - 3.721: 41.0369% ( 1012) 00:13:09.278 3.721 - 3.745: 48.3516% ( 903) 00:13:09.278 3.745 - 3.769: 53.6249% ( 651) 00:13:09.278 3.769 - 3.793: 58.7768% ( 636) 00:13:09.278 3.793 - 3.816: 63.4670% ( 579) 00:13:09.278 3.816 - 3.840: 67.8574% ( 542) 00:13:09.278 3.840 - 3.864: 71.4702% ( 446) 00:13:09.278 3.864 - 3.887: 74.9615% ( 431) 00:13:09.278 3.887 - 3.911: 78.7606% ( 469) 00:13:09.278 3.911 - 3.935: 82.2438% ( 430) 00:13:09.278 3.935 - 3.959: 84.9008% ( 328) 00:13:09.278 3.959 - 3.982: 86.9502% ( 253) 00:13:09.278 3.982 - 4.006: 88.6432% ( 209) 00:13:09.278 4.006 - 4.030: 90.3362% ( 
209) 00:13:09.278 4.030 - 4.053: 91.7456% ( 174) 00:13:09.278 4.053 - 4.077: 93.0579% ( 162) 00:13:09.278 4.077 - 4.101: 94.2730% ( 150) 00:13:09.278 4.101 - 4.124: 94.9534% ( 84) 00:13:09.278 4.124 - 4.148: 95.5691% ( 76) 00:13:09.278 4.148 - 4.172: 95.9903% ( 52) 00:13:09.278 4.172 - 4.196: 96.1847% ( 24) 00:13:09.278 4.196 - 4.219: 96.3548% ( 21) 00:13:09.278 4.219 - 4.243: 96.5087% ( 19) 00:13:09.278 4.243 - 4.267: 96.6302% ( 15) 00:13:09.278 4.267 - 4.290: 96.7922% ( 20) 00:13:09.278 4.290 - 4.314: 96.9056% ( 14) 00:13:09.278 4.314 - 4.338: 97.0190% ( 14) 00:13:09.278 4.338 - 4.361: 97.1405% ( 15) 00:13:09.278 4.361 - 4.385: 97.1891% ( 6) 00:13:09.278 4.385 - 4.409: 97.2134% ( 3) 00:13:09.278 4.409 - 4.433: 97.2701% ( 7) 00:13:09.278 4.433 - 4.456: 97.2783% ( 1) 00:13:09.278 4.456 - 4.480: 97.3107% ( 4) 00:13:09.278 4.480 - 4.504: 97.3350% ( 3) 00:13:09.278 4.504 - 4.527: 97.3674% ( 4) 00:13:09.278 4.551 - 4.575: 97.3755% ( 1) 00:13:09.278 4.575 - 4.599: 97.3917% ( 2) 00:13:09.278 4.599 - 4.622: 97.3998% ( 1) 00:13:09.278 4.622 - 4.646: 97.4079% ( 1) 00:13:09.278 4.646 - 4.670: 97.4160% ( 1) 00:13:09.278 4.693 - 4.717: 97.4484% ( 4) 00:13:09.278 4.717 - 4.741: 97.4889% ( 5) 00:13:09.278 4.741 - 4.764: 97.4970% ( 1) 00:13:09.278 4.764 - 4.788: 97.5213% ( 3) 00:13:09.278 4.788 - 4.812: 97.5780% ( 7) 00:13:09.278 4.812 - 4.836: 97.6347% ( 7) 00:13:09.278 4.836 - 4.859: 97.7076% ( 9) 00:13:09.278 4.859 - 4.883: 97.7481% ( 5) 00:13:09.278 4.883 - 4.907: 97.7805% ( 4) 00:13:09.278 4.907 - 4.930: 97.8372% ( 7) 00:13:09.278 4.930 - 4.954: 97.8777% ( 5) 00:13:09.278 4.954 - 4.978: 97.9263% ( 6) 00:13:09.278 4.978 - 5.001: 97.9911% ( 8) 00:13:09.278 5.001 - 5.025: 98.0235% ( 4) 00:13:09.278 5.025 - 5.049: 98.0721% ( 6) 00:13:09.278 5.049 - 5.073: 98.1612% ( 11) 00:13:09.278 5.073 - 5.096: 98.1855% ( 3) 00:13:09.278 5.096 - 5.120: 98.2260% ( 5) 00:13:09.278 5.120 - 5.144: 98.2422% ( 2) 00:13:09.279 5.144 - 5.167: 98.2746% ( 4) 00:13:09.279 5.167 - 5.191: 98.2989% ( 3) 
00:13:09.279 5.191 - 5.215: 98.3313% ( 4) 00:13:09.279 5.215 - 5.239: 98.3556% ( 3) 00:13:09.279 5.239 - 5.262: 98.3637% ( 1) 00:13:09.279 5.262 - 5.286: 98.3880% ( 3) 00:13:09.279 5.286 - 5.310: 98.3961% ( 1) 00:13:09.279 5.310 - 5.333: 98.4123% ( 2) 00:13:09.279 5.333 - 5.357: 98.4285% ( 2) 00:13:09.279 5.357 - 5.381: 98.4447% ( 2) 00:13:09.279 5.381 - 5.404: 98.4528% ( 1) 00:13:09.279 5.452 - 5.476: 98.4609% ( 1) 00:13:09.279 5.499 - 5.523: 98.4690% ( 1) 00:13:09.279 5.523 - 5.547: 98.4771% ( 1) 00:13:09.279 5.641 - 5.665: 98.4852% ( 1) 00:13:09.279 5.736 - 5.760: 98.4933% ( 1) 00:13:09.279 5.855 - 5.879: 98.5014% ( 1) 00:13:09.279 5.926 - 5.950: 98.5095% ( 1) 00:13:09.279 5.950 - 5.973: 98.5176% ( 1) 00:13:09.279 5.973 - 5.997: 98.5257% ( 1) 00:13:09.279 6.163 - 6.210: 98.5338% ( 1) 00:13:09.279 6.400 - 6.447: 98.5419% ( 1) 00:13:09.279 6.447 - 6.495: 98.5500% ( 1) 00:13:09.279 6.495 - 6.542: 98.5581% ( 1) 00:13:09.279 6.542 - 6.590: 98.5662% ( 1) 00:13:09.279 6.921 - 6.969: 98.5743% ( 1) 00:13:09.279 7.111 - 7.159: 98.5824% ( 1) 00:13:09.279 7.159 - 7.206: 98.5905% ( 1) 00:13:09.279 7.253 - 7.301: 98.5986% ( 1) 00:13:09.279 7.301 - 7.348: 98.6067% ( 1) 00:13:09.279 7.538 - 7.585: 98.6148% ( 1) 00:13:09.279 7.585 - 7.633: 98.6229% ( 1) 00:13:09.279 7.633 - 7.680: 98.6310% ( 1) 00:13:09.279 7.680 - 7.727: 98.6472% ( 2) 00:13:09.279 7.822 - 7.870: 98.6634% ( 2) 00:13:09.279 7.917 - 7.964: 98.6715% ( 1) 00:13:09.279 8.107 - 8.154: 98.6796% ( 1) 00:13:09.279 8.201 - 8.249: 98.7039% ( 3) 00:13:09.279 8.296 - 8.344: 98.7120% ( 1) 00:13:09.279 8.391 - 8.439: 98.7282% ( 2) 00:13:09.279 8.439 - 8.486: 98.7363% ( 1) 00:13:09.279 8.533 - 8.581: 98.7525% ( 2) 00:13:09.279 8.913 - 8.960: 98.7606% ( 1) 00:13:09.279 9.007 - 9.055: 98.7687% ( 1) 00:13:09.279 9.102 - 9.150: 98.7849% ( 2) 00:13:09.279 9.244 - 9.292: 98.8011% ( 2) 00:13:09.279 9.434 - 9.481: 98.8092% ( 1) 00:13:09.279 9.481 - 9.529: 98.8173% ( 1) 00:13:09.279 9.529 - 9.576: 98.8254% ( 1) 00:13:09.279 9.576 - 
9.624: 98.8497% ( 3) 00:13:09.279 9.624 - 9.671: 98.8659% ( 2) 00:13:09.279 9.671 - 9.719: 98.8740% ( 1) 00:13:09.279 9.813 - 9.861: 98.8902% ( 2) 00:13:09.279 9.861 - 9.908: 98.8983% ( 1) 00:13:09.279 9.908 - 9.956: 98.9064% ( 1) 00:13:09.279 10.003 - 10.050: 98.9145% ( 1) 00:13:09.279 10.098 - 10.145: 98.9226% ( 1) 00:13:09.279 10.145 - 10.193: 98.9388% ( 2) 00:13:09.279 10.193 - 10.240: 98.9469% ( 1) 00:13:09.279 10.714 - 10.761: 98.9550% ( 1) 00:13:09.279 10.856 - 10.904: 98.9631% ( 1) 00:13:09.279 10.999 - 11.046: 98.9712% ( 1) 00:13:09.279 11.046 - 11.093: 98.9793% ( 1) 00:13:09.279 11.141 - 11.188: 98.9874% ( 1) 00:13:09.279 11.236 - 11.283: 98.9955% ( 1) 00:13:09.279 11.378 - 11.425: 99.0036% ( 1) 00:13:09.279 11.425 - 11.473: 99.0117% ( 1) 00:13:09.279 11.662 - 11.710: 99.0198% ( 1) 00:13:09.279 11.852 - 11.899: 99.0279% ( 1) 00:13:09.279 12.041 - 12.089: 99.0360% ( 1) 00:13:09.279 12.231 - 12.326: 99.0522% ( 2) 00:13:09.279 12.516 - 12.610: 99.0603% ( 1) 00:13:09.279 12.705 - 12.800: 99.0684% ( 1) 00:13:09.279 12.895 - 12.990: 99.0765% ( 1) 00:13:09.279 12.990 - 13.084: 99.0846% ( 1) 00:13:09.279 13.084 - 13.179: 99.0928% ( 1) 00:13:09.279 13.369 - 13.464: 99.1090% ( 2) 00:13:09.279 14.033 - 14.127: 99.1252% ( 2) 00:13:09.279 14.317 - 14.412: 99.1333% ( 1) 00:13:09.279 14.507 - 14.601: 99.1414% ( 1) 00:13:09.279 14.601 - 14.696: 99.1495% ( 1) 00:13:09.279 14.791 - 14.886: 99.1576% ( 1) 00:13:09.279 15.644 - 15.739: 99.1657% ( 1) 00:13:09.279 16.972 - 17.067: 99.1738% ( 1) 00:13:09.279 17.161 - 17.256: 99.1819% ( 1) 00:13:09.279 17.256 - 17.351: 99.1900% ( 1) 00:13:09.279 17.446 - 17.541: 99.1981% ( 1) 00:13:09.279 17.541 - 17.636: 99.2629% ( 8) 00:13:09.279 17.636 - 17.730: 99.3196% ( 7) 00:13:09.279 17.730 - 17.825: 99.4006% ( 10) 00:13:09.279 17.825 - 17.920: 99.4492% ( 6) 00:13:09.279 17.920 - 18.015: 99.4897% ( 5) 00:13:09.279 18.015 - 18.110: 99.5140% ( 3) 00:13:09.279 18.110 - 18.204: 99.5788% ( 8) 00:13:09.279 18.204 - 18.299: 99.6355% ( 7) 
00:13:09.279 18.299 - 18.394: 99.6841% ( 6) 00:13:09.279 18.394 - 18.489: 99.7408% ( 7) 00:13:09.279 18.489 - 18.584: 99.7813% ( 5) 00:13:09.279 18.584 - 18.679: 99.8056% ( 3) 00:13:09.279 18.679 - 18.773: 99.8137% ( 1) 00:13:09.279 18.773 - 18.868: 99.8218% ( 1) 00:13:09.279 18.868 - 18.963: 99.8380% ( 2) 00:13:09.279 19.058 - 19.153: 99.8542% ( 2) 00:13:09.279 19.247 - 19.342: 99.8623% ( 1) 00:13:09.279 19.342 - 19.437: 99.8704% ( 1) 00:13:09.279 19.437 - 19.532: 99.8785% ( 1) 00:13:09.279 19.532 - 19.627: 99.8866% ( 1) 00:13:09.279 19.627 - 19.721: 99.8947% ( 1) 00:13:09.279 20.196 - 20.290: 99.9028% ( 1) 00:13:09.279 22.566 - 22.661: 99.9109% ( 1) 00:13:09.279 22.945 - 23.040: 99.9190% ( 1) 00:13:09.279 26.738 - 26.927: 99.9271% ( 1) 00:13:09.279 27.876 - 28.065: 99.9352% ( 1) 00:13:09.279 32.237 - 32.427: 99.9433% ( 1) 00:13:09.279 3980.705 - 4004.978: 99.9595% ( 2) 00:13:09.279 4004.978 - 4029.250: 100.0000% ( 5) 00:13:09.279 00:13:09.279 Complete histogram 00:13:09.279 ================== 00:13:09.279 Range in us Cumulative Count 00:13:09.279 2.050 - 2.062: 0.0081% ( 1) 00:13:09.279 2.062 - 2.074: 4.1555% ( 512) 00:13:09.279 2.074 - 2.086: 29.0563% ( 3074) 00:13:09.279 2.086 - 2.098: 34.3297% ( 651) 00:13:09.279 2.098 - 2.110: 42.6164% ( 1023) 00:13:09.279 2.110 - 2.121: 55.6906% ( 1614) 00:13:09.279 2.121 - 2.133: 58.0721% ( 294) 00:13:09.279 2.133 - 2.145: 63.9449% ( 725) 00:13:09.279 2.145 - 2.157: 74.2811% ( 1276) 00:13:09.279 2.157 - 2.169: 76.1766% ( 234) 00:13:09.279 2.169 - 2.181: 80.5508% ( 540) 00:13:09.279 2.181 - 2.193: 85.3868% ( 597) 00:13:09.279 2.193 - 2.204: 86.2859% ( 111) 00:13:09.279 2.204 - 2.216: 87.6792% ( 172) 00:13:09.279 2.216 - 2.228: 90.1661% ( 307) 00:13:09.279 2.228 - 2.240: 91.9563% ( 221) 00:13:09.279 2.240 - 2.252: 92.7906% ( 103) 00:13:09.279 2.252 - 2.264: 93.4143% ( 77) 00:13:09.279 2.264 - 2.276: 93.6817% ( 33) 00:13:09.279 2.276 - 2.287: 93.9733% ( 36) 00:13:09.279 2.287 - 2.299: 94.3864% ( 51) 00:13:09.279 2.299 - 2.311: 
94.8157% ( 53) 00:13:09.279 2.311 - 2.323: 94.9858% ( 21) 00:13:09.279 2.323 - 2.335: 95.0506% ( 8) 00:13:09.279 2.335 - 2.347: 95.1316% ( 10) 00:13:09.279 2.347 - 2.359: 95.2450% ( 14) 00:13:09.279 2.359 - 2.370: 95.3260% ( 10) 00:13:09.279 2.370 - 2.382: 95.3989% ( 9) 00:13:09.279 2.382 - 2.394: 95.5124% ( 14) 00:13:09.279 2.394 - 2.406: 95.6096% ( 12) 00:13:09.279 2.406 - 2.418: 95.6663% ( 7) 00:13:09.279 2.418 - 2.430: 95.7473% ( 10) 00:13:09.279 2.430 - 2.441: 95.9903% ( 30) 00:13:09.279 2.441 - 2.453: 96.1604% ( 21) 00:13:09.279 2.453 - 2.465: 96.3467% ( 23) 00:13:09.279 2.465 - 2.477: 96.5411% ( 24) 00:13:09.279 2.477 - 2.489: 96.6950% ( 19) 00:13:09.279 2.489 - 2.501: 96.9137% ( 27) 00:13:09.279 2.501 - 2.513: 97.0919% ( 22) 00:13:09.279 2.513 - 2.524: 97.2296% ( 17) 00:13:09.279 2.524 - 2.536: 97.4727% ( 30) 00:13:09.279 2.536 - 2.548: 97.6833% ( 26) 00:13:09.279 2.548 - 2.560: 97.8453% ( 20) 00:13:09.279 2.560 - 2.572: 97.9344% ( 11) 00:13:09.279 2.572 - 2.584: 98.0073% ( 9) 00:13:09.279 2.584 - 2.596: 98.0640% ( 7) 00:13:09.279 2.596 - 2.607: 98.1126% ( 6) 00:13:09.279 2.607 - 2.619: 98.1612% ( 6) 00:13:09.279 2.619 - 2.631: 98.1855% ( 3) 00:13:09.279 2.631 - 2.643: 98.2017% ( 2) 00:13:09.279 2.643 - 2.655: 98.2341% ( 4) 00:13:09.279 2.655 - 2.667: 98.2503% ( 2) 00:13:09.279 2.667 - 2.679: 98.2584% ( 1) 00:13:09.279 2.679 - 2.690: 98.2746% ( 2) 00:13:09.279 2.690 - 2.702: 98.2827% ( 1) 00:13:09.279 2.702 - 2.714: 98.2908% ( 1) 00:13:09.279 2.714 - 2.726: 98.3232% ( 4) 00:13:09.279 2.726 - 2.738: 98.3313% ( 1) 00:13:09.279 2.750 - 2.761: 98.3394% ( 1) 00:13:09.279 2.761 - 2.773: 98.3556% ( 2) 00:13:09.279 2.773 - 2.785: 98.3718% ( 2) 00:13:09.279 2.785 - 2.797: 98.3880% ( 2) 00:13:09.279 2.797 - 2.809: 98.3961% ( 1) 00:13:09.279 2.821 - 2.833: 98.4042% ( 1) 00:13:09.279 2.844 - 2.856: 98.4123% ( 1) 00:13:09.279 2.856 - 2.868: 98.4285% ( 2) 00:13:09.279 2.880 - 2.892: 98.4447% ( 2) 00:13:09.279 2.892 - 2.904: 98.4609% ( 2) 00:13:09.279 2.963 - 2.975: 
98.4690% ( 1) 00:13:09.279 2.987 - 2.999: 9[2024-12-09 18:02:31.944268] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.279 8.4852% ( 2) 00:13:09.279 3.010 - 3.022: 98.4933% ( 1) 00:13:09.279 3.034 - 3.058: 98.5014% ( 1) 00:13:09.279 3.058 - 3.081: 98.5095% ( 1) 00:13:09.279 3.200 - 3.224: 98.5176% ( 1) 00:13:09.279 3.247 - 3.271: 98.5257% ( 1) 00:13:09.279 3.342 - 3.366: 98.5338% ( 1) 00:13:09.280 3.579 - 3.603: 98.5419% ( 1) 00:13:09.280 3.603 - 3.627: 98.5500% ( 1) 00:13:09.280 3.650 - 3.674: 98.5743% ( 3) 00:13:09.280 3.674 - 3.698: 98.5824% ( 1) 00:13:09.280 3.698 - 3.721: 98.5905% ( 1) 00:13:09.280 3.721 - 3.745: 98.5986% ( 1) 00:13:09.280 3.745 - 3.769: 98.6229% ( 3) 00:13:09.280 3.887 - 3.911: 98.6310% ( 1) 00:13:09.280 3.911 - 3.935: 98.6391% ( 1) 00:13:09.280 3.959 - 3.982: 98.6472% ( 1) 00:13:09.280 4.030 - 4.053: 98.6553% ( 1) 00:13:09.280 4.053 - 4.077: 98.6634% ( 1) 00:13:09.280 4.077 - 4.101: 98.6715% ( 1) 00:13:09.280 4.148 - 4.172: 98.6796% ( 1) 00:13:09.280 4.196 - 4.219: 98.7039% ( 3) 00:13:09.280 4.338 - 4.361: 98.7120% ( 1) 00:13:09.280 4.717 - 4.741: 98.7201% ( 1) 00:13:09.280 5.570 - 5.594: 98.7282% ( 1) 00:13:09.280 5.784 - 5.807: 98.7363% ( 1) 00:13:09.280 6.495 - 6.542: 98.7444% ( 1) 00:13:09.280 6.590 - 6.637: 98.7606% ( 2) 00:13:09.280 6.732 - 6.779: 98.7768% ( 2) 00:13:09.280 6.779 - 6.827: 98.7849% ( 1) 00:13:09.280 7.206 - 7.253: 98.7930% ( 1) 00:13:09.280 7.301 - 7.348: 98.8011% ( 1) 00:13:09.280 7.633 - 7.680: 98.8092% ( 1) 00:13:09.280 7.870 - 7.917: 98.8254% ( 2) 00:13:09.280 8.012 - 8.059: 98.8335% ( 1) 00:13:09.280 8.107 - 8.154: 98.8416% ( 1) 00:13:09.280 8.913 - 8.960: 98.8497% ( 1) 00:13:09.280 9.007 - 9.055: 98.8578% ( 1) 00:13:09.280 9.671 - 9.719: 98.8659% ( 1) 00:13:09.280 11.520 - 11.567: 98.8740% ( 1) 00:13:09.280 12.990 - 13.084: 98.8821% ( 1) 00:13:09.280 15.644 - 15.739: 98.8983% ( 2) 00:13:09.280 15.834 - 15.929: 98.9064% ( 1) 00:13:09.280 16.024 - 
16.119: 98.9469% ( 5) 00:13:09.280 16.119 - 16.213: 98.9550% ( 1) 00:13:09.280 16.213 - 16.308: 98.9793% ( 3) 00:13:09.280 16.308 - 16.403: 98.9955% ( 2) 00:13:09.280 16.403 - 16.498: 99.0198% ( 3) 00:13:09.280 16.498 - 16.593: 99.0441% ( 3) 00:13:09.280 16.593 - 16.687: 99.0522% ( 1) 00:13:09.280 16.687 - 16.782: 99.1090% ( 7) 00:13:09.280 16.782 - 16.877: 99.1252% ( 2) 00:13:09.280 16.877 - 16.972: 99.1495% ( 3) 00:13:09.280 16.972 - 17.067: 99.1657% ( 2) 00:13:09.280 17.161 - 17.256: 99.1738% ( 1) 00:13:09.280 17.256 - 17.351: 99.1900% ( 2) 00:13:09.280 17.446 - 17.541: 99.1981% ( 1) 00:13:09.280 17.541 - 17.636: 99.2143% ( 2) 00:13:09.280 17.636 - 17.730: 99.2305% ( 2) 00:13:09.280 17.730 - 17.825: 99.2386% ( 1) 00:13:09.280 17.825 - 17.920: 99.2467% ( 1) 00:13:09.280 17.920 - 18.015: 99.2548% ( 1) 00:13:09.280 18.015 - 18.110: 99.2629% ( 1) 00:13:09.280 18.299 - 18.394: 99.2710% ( 1) 00:13:09.280 18.394 - 18.489: 99.2791% ( 1) 00:13:09.280 19.153 - 19.247: 99.2872% ( 1) 00:13:09.280 22.187 - 22.281: 99.2953% ( 1) 00:13:09.280 153.979 - 154.738: 99.3034% ( 1) 00:13:09.280 3762.252 - 3786.524: 99.3115% ( 1) 00:13:09.280 3980.705 - 4004.978: 99.8218% ( 63) 00:13:09.280 4004.978 - 4029.250: 99.9838% ( 20) 00:13:09.280 5000.154 - 5024.427: 99.9919% ( 1) 00:13:09.280 5995.330 - 6019.603: 100.0000% ( 1) 00:13:09.280 00:13:09.280 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:09.280 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:09.280 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:09.280 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:09.280 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:09.280 [ 00:13:09.280 { 00:13:09.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:09.280 "subtype": "Discovery", 00:13:09.280 "listen_addresses": [], 00:13:09.280 "allow_any_host": true, 00:13:09.280 "hosts": [] 00:13:09.280 }, 00:13:09.280 { 00:13:09.280 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:09.280 "subtype": "NVMe", 00:13:09.280 "listen_addresses": [ 00:13:09.280 { 00:13:09.280 "trtype": "VFIOUSER", 00:13:09.280 "adrfam": "IPv4", 00:13:09.280 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:09.280 "trsvcid": "0" 00:13:09.280 } 00:13:09.280 ], 00:13:09.280 "allow_any_host": true, 00:13:09.280 "hosts": [], 00:13:09.280 "serial_number": "SPDK1", 00:13:09.280 "model_number": "SPDK bdev Controller", 00:13:09.280 "max_namespaces": 32, 00:13:09.280 "min_cntlid": 1, 00:13:09.280 "max_cntlid": 65519, 00:13:09.280 "namespaces": [ 00:13:09.280 { 00:13:09.280 "nsid": 1, 00:13:09.280 "bdev_name": "Malloc1", 00:13:09.280 "name": "Malloc1", 00:13:09.280 "nguid": "46D5D539A3F344ABBFB06AC5D2229C76", 00:13:09.280 "uuid": "46d5d539-a3f3-44ab-bfb0-6ac5d2229c76" 00:13:09.280 }, 00:13:09.280 { 00:13:09.280 "nsid": 2, 00:13:09.280 "bdev_name": "Malloc3", 00:13:09.280 "name": "Malloc3", 00:13:09.280 "nguid": "EA09C69C3C8C4CCC8DE82DF570283721", 00:13:09.280 "uuid": "ea09c69c-3c8c-4ccc-8de8-2df570283721" 00:13:09.280 } 00:13:09.280 ] 00:13:09.280 }, 00:13:09.280 { 00:13:09.280 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:09.280 "subtype": "NVMe", 00:13:09.280 "listen_addresses": [ 00:13:09.280 { 00:13:09.280 "trtype": "VFIOUSER", 00:13:09.280 "adrfam": "IPv4", 00:13:09.280 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:09.280 "trsvcid": "0" 00:13:09.280 } 00:13:09.280 ], 00:13:09.280 "allow_any_host": true, 00:13:09.280 "hosts": [], 00:13:09.280 "serial_number": "SPDK2", 00:13:09.280 "model_number": "SPDK bdev Controller", 
00:13:09.280 "max_namespaces": 32, 00:13:09.280 "min_cntlid": 1, 00:13:09.280 "max_cntlid": 65519, 00:13:09.280 "namespaces": [ 00:13:09.280 { 00:13:09.280 "nsid": 1, 00:13:09.280 "bdev_name": "Malloc2", 00:13:09.280 "name": "Malloc2", 00:13:09.280 "nguid": "D09A7CF0299543A08D9F1A059D93AD76", 00:13:09.280 "uuid": "d09a7cf0-2995-43a0-8d9f-1a059d93ad76" 00:13:09.280 } 00:13:09.280 ] 00:13:09.280 } 00:13:09.280 ] 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1448226 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:09.280 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:09.538 [2024-12-09 18:02:32.445024] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.538 Malloc4 00:13:09.796 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:09.796 [2024-12-09 18:02:32.830897] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.053 18:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:10.053 Asynchronous Event Request test 00:13:10.053 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:10.053 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:10.053 Registering asynchronous event callbacks... 00:13:10.053 Starting namespace attribute notice tests for all controllers... 00:13:10.053 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:10.053 aer_cb - Changed Namespace 00:13:10.053 Cleaning up... 
00:13:10.311 [ 00:13:10.311 { 00:13:10.311 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:10.311 "subtype": "Discovery", 00:13:10.311 "listen_addresses": [], 00:13:10.311 "allow_any_host": true, 00:13:10.311 "hosts": [] 00:13:10.311 }, 00:13:10.311 { 00:13:10.311 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:10.311 "subtype": "NVMe", 00:13:10.311 "listen_addresses": [ 00:13:10.311 { 00:13:10.311 "trtype": "VFIOUSER", 00:13:10.311 "adrfam": "IPv4", 00:13:10.311 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:10.311 "trsvcid": "0" 00:13:10.311 } 00:13:10.311 ], 00:13:10.311 "allow_any_host": true, 00:13:10.311 "hosts": [], 00:13:10.311 "serial_number": "SPDK1", 00:13:10.311 "model_number": "SPDK bdev Controller", 00:13:10.311 "max_namespaces": 32, 00:13:10.311 "min_cntlid": 1, 00:13:10.311 "max_cntlid": 65519, 00:13:10.311 "namespaces": [ 00:13:10.311 { 00:13:10.311 "nsid": 1, 00:13:10.311 "bdev_name": "Malloc1", 00:13:10.311 "name": "Malloc1", 00:13:10.311 "nguid": "46D5D539A3F344ABBFB06AC5D2229C76", 00:13:10.311 "uuid": "46d5d539-a3f3-44ab-bfb0-6ac5d2229c76" 00:13:10.311 }, 00:13:10.311 { 00:13:10.311 "nsid": 2, 00:13:10.311 "bdev_name": "Malloc3", 00:13:10.311 "name": "Malloc3", 00:13:10.311 "nguid": "EA09C69C3C8C4CCC8DE82DF570283721", 00:13:10.311 "uuid": "ea09c69c-3c8c-4ccc-8de8-2df570283721" 00:13:10.311 } 00:13:10.311 ] 00:13:10.311 }, 00:13:10.311 { 00:13:10.311 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:10.311 "subtype": "NVMe", 00:13:10.311 "listen_addresses": [ 00:13:10.311 { 00:13:10.311 "trtype": "VFIOUSER", 00:13:10.311 "adrfam": "IPv4", 00:13:10.311 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:10.311 "trsvcid": "0" 00:13:10.311 } 00:13:10.311 ], 00:13:10.311 "allow_any_host": true, 00:13:10.311 "hosts": [], 00:13:10.311 "serial_number": "SPDK2", 00:13:10.311 "model_number": "SPDK bdev Controller", 00:13:10.311 "max_namespaces": 32, 00:13:10.311 "min_cntlid": 1, 00:13:10.311 "max_cntlid": 65519, 00:13:10.311 "namespaces": [ 
00:13:10.311 { 00:13:10.311 "nsid": 1, 00:13:10.311 "bdev_name": "Malloc2", 00:13:10.311 "name": "Malloc2", 00:13:10.311 "nguid": "D09A7CF0299543A08D9F1A059D93AD76", 00:13:10.311 "uuid": "d09a7cf0-2995-43a0-8d9f-1a059d93ad76" 00:13:10.311 }, 00:13:10.311 { 00:13:10.311 "nsid": 2, 00:13:10.311 "bdev_name": "Malloc4", 00:13:10.311 "name": "Malloc4", 00:13:10.311 "nguid": "59CCFC7D7F9340069232D634886C8FA2", 00:13:10.311 "uuid": "59ccfc7d-7f93-4006-9232-d634886c8fa2" 00:13:10.311 } 00:13:10.311 ] 00:13:10.311 } 00:13:10.311 ] 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1448226 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1442622 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1442622 ']' 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1442622 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442622 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442622' 00:13:10.311 killing process with pid 1442622 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1442622 00:13:10.311 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1442622 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1448368 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1448368' 00:13:10.569 Process pid: 1448368 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1448368 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1448368 ']' 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.569 
18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.569 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 [2024-12-09 18:02:33.513291] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:10.569 [2024-12-09 18:02:33.514374] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:13:10.569 [2024-12-09 18:02:33.514446] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.569 [2024-12-09 18:02:33.587515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.828 [2024-12-09 18:02:33.649429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.828 [2024-12-09 18:02:33.649482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.828 [2024-12-09 18:02:33.649504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.828 [2024-12-09 18:02:33.649516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.828 [2024-12-09 18:02:33.649525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.828 [2024-12-09 18:02:33.650967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.828 [2024-12-09 18:02:33.651039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.828 [2024-12-09 18:02:33.651069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.828 [2024-12-09 18:02:33.651072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.828 [2024-12-09 18:02:33.741786] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:10.828 [2024-12-09 18:02:33.741982] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:10.828 [2024-12-09 18:02:33.742289] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:10.828 [2024-12-09 18:02:33.742967] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:10.828 [2024-12-09 18:02:33.743166] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
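The `killprocess` sequence that recurs throughout this log (the `'[' -z $pid ']'` guard, `kill -0` liveness probe, `ps --no-headers -o comm=` name lookup, the refusal to signal a `sudo` wrapper, then `kill` followed by `wait`) can be sketched as a standalone shell function. This is a hedged reconstruction from the log lines only: the function name `killprocess_sketch` and the standalone form are assumptions, while the individual checks are the ones the trace shows.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard sequence visible in this log
# (autotest_common.sh trace). Reconstructed from the xtrace lines,
# not copied source: function name and return codes are assumptions.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1              # '[' -z $pid ']' guard from the log
    kill -0 "$pid" 2>/dev/null || return 1 # kill -0: is the process alive?
    local process_name
    # comm name of the target, e.g. reactor_0 in this log
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1 # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                # reap it, mirroring the trailing 'wait'
    return 0
}
```

Usage: start a disposable background process and hand its pid to the function; `wait` only reaps children of the same shell, which matches how the test script owns `nvmf_tgt`.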
00:13:10.828 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.828 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:10.828 18:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:11.763 18:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:12.333 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:12.333 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:12.333 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.333 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:12.333 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:12.594 Malloc1 00:13:12.594 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:12.852 18:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:13.110 18:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:13.367 18:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.367 18:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:13.367 18:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:13.625 Malloc2 00:13:13.625 18:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:13.882 18:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:14.140 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:14.397 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:14.397 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1448368 00:13:14.397 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1448368 ']' 00:13:14.397 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1448368 00:13:14.397 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:14.397 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.397 18:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448368 00:13:14.655 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.655 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.655 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448368' 00:13:14.655 killing process with pid 1448368 00:13:14.655 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1448368 00:13:14.655 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1448368 00:13:14.914 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:14.914 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:14.914 00:13:14.915 real 0m53.483s 00:13:14.915 user 3m26.669s 00:13:14.915 sys 0m3.805s 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.915 ************************************ 00:13:14.915 END TEST nvmf_vfio_user 00:13:14.915 ************************************ 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.915 ************************************ 00:13:14.915 START TEST nvmf_vfio_user_nvme_compliance 00:13:14.915 ************************************ 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:14.915 * Looking for test storage... 00:13:14.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.915 18:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.915 18:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.915 --rc genhtml_branch_coverage=1 00:13:14.915 --rc genhtml_function_coverage=1 00:13:14.915 --rc genhtml_legend=1 00:13:14.915 --rc geninfo_all_blocks=1 00:13:14.915 --rc geninfo_unexecuted_blocks=1 00:13:14.915 00:13:14.915 ' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.915 --rc genhtml_branch_coverage=1 00:13:14.915 --rc genhtml_function_coverage=1 00:13:14.915 --rc genhtml_legend=1 00:13:14.915 --rc geninfo_all_blocks=1 00:13:14.915 --rc geninfo_unexecuted_blocks=1 00:13:14.915 00:13:14.915 ' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.915 --rc genhtml_branch_coverage=1 00:13:14.915 --rc genhtml_function_coverage=1 00:13:14.915 --rc 
genhtml_legend=1 00:13:14.915 --rc geninfo_all_blocks=1 00:13:14.915 --rc geninfo_unexecuted_blocks=1 00:13:14.915 00:13:14.915 ' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.915 --rc genhtml_branch_coverage=1 00:13:14.915 --rc genhtml_function_coverage=1 00:13:14.915 --rc genhtml_legend=1 00:13:14.915 --rc geninfo_all_blocks=1 00:13:14.915 --rc geninfo_unexecuted_blocks=1 00:13:14.915 00:13:14.915 ' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.915 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.916 18:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.916 18:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1448981 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1448981' 00:13:14.916 Process pid: 1448981 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1448981 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1448981 ']' 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.916 18:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:15.175 [2024-12-09 18:02:37.975625] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:13:15.175 [2024-12-09 18:02:37.975706] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.175 [2024-12-09 18:02:38.045217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.175 [2024-12-09 18:02:38.103939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.175 [2024-12-09 18:02:38.104011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.175 [2024-12-09 18:02:38.104024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.175 [2024-12-09 18:02:38.104035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.175 [2024-12-09 18:02:38.104044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.175 [2024-12-09 18:02:38.105460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.175 [2024-12-09 18:02:38.105574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.175 [2024-12-09 18:02:38.105579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.432 18:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.432 18:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:15.432 18:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 18:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 malloc0 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:16.366 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:16.624 00:13:16.624 00:13:16.624 CUnit - A unit testing framework for C - Version 2.1-3 00:13:16.624 http://cunit.sourceforge.net/ 00:13:16.624 00:13:16.624 00:13:16.624 Suite: nvme_compliance 00:13:16.624 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 18:02:39.470161] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.624 [2024-12-09 18:02:39.471669] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:16.624 [2024-12-09 18:02:39.471696] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:16.624 [2024-12-09 18:02:39.471708] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:16.624 [2024-12-09 18:02:39.473180] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.624 passed 00:13:16.624 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 18:02:39.557768] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.624 [2024-12-09 18:02:39.560790] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.624 passed 00:13:16.624 Test: admin_identify_ns ...[2024-12-09 18:02:39.648095] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.882 [2024-12-09 18:02:39.707565] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:16.882 [2024-12-09 18:02:39.715564] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:16.882 [2024-12-09 18:02:39.736692] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:13:16.882 passed 00:13:16.882 Test: admin_get_features_mandatory_features ...[2024-12-09 18:02:39.820705] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.882 [2024-12-09 18:02:39.823730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.882 passed 00:13:16.882 Test: admin_get_features_optional_features ...[2024-12-09 18:02:39.908257] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.882 [2024-12-09 18:02:39.911276] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.140 passed 00:13:17.140 Test: admin_set_features_number_of_queues ...[2024-12-09 18:02:39.993340] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.140 [2024-12-09 18:02:40.097808] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.140 passed 00:13:17.398 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 18:02:40.182634] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.398 [2024-12-09 18:02:40.185653] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.398 passed 00:13:17.398 Test: admin_get_log_page_with_lpo ...[2024-12-09 18:02:40.272186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.398 [2024-12-09 18:02:40.340567] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:17.398 [2024-12-09 18:02:40.353625] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.398 passed 00:13:17.398 Test: fabric_property_get ...[2024-12-09 18:02:40.436106] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.398 [2024-12-09 18:02:40.437389] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:17.656 [2024-12-09 18:02:40.439133] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.656 passed 00:13:17.656 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 18:02:40.521678] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.656 [2024-12-09 18:02:40.522974] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:17.656 [2024-12-09 18:02:40.524707] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.656 passed 00:13:17.656 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 18:02:40.609017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.656 [2024-12-09 18:02:40.692559] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:17.914 [2024-12-09 18:02:40.708558] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:17.914 [2024-12-09 18:02:40.713663] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.914 passed 00:13:17.914 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 18:02:40.796202] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.914 [2024-12-09 18:02:40.797504] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:17.914 [2024-12-09 18:02:40.799235] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.914 passed 00:13:17.914 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 18:02:40.884451] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.172 [2024-12-09 18:02:40.965567] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:18.172 [2024-12-09 
18:02:40.989570] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:18.172 [2024-12-09 18:02:40.994656] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.172 passed 00:13:18.172 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 18:02:41.078231] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.172 [2024-12-09 18:02:41.079556] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:18.172 [2024-12-09 18:02:41.079596] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:18.172 [2024-12-09 18:02:41.081253] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.172 passed 00:13:18.172 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 18:02:41.162477] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.430 [2024-12-09 18:02:41.255577] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:18.430 [2024-12-09 18:02:41.263556] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:18.430 [2024-12-09 18:02:41.271574] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:18.430 [2024-12-09 18:02:41.279552] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:18.430 [2024-12-09 18:02:41.308679] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.430 passed 00:13:18.430 Test: admin_create_io_sq_verify_pc ...[2024-12-09 18:02:41.391825] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.430 [2024-12-09 18:02:41.408569] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:18.430 [2024-12-09 18:02:41.426227] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.430 passed 00:13:18.688 Test: admin_create_io_qp_max_qps ...[2024-12-09 18:02:41.509784] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.621 [2024-12-09 18:02:42.610566] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:20.185 [2024-12-09 18:02:42.989240] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.185 passed 00:13:20.185 Test: admin_create_io_sq_shared_cq ...[2024-12-09 18:02:43.071560] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.185 [2024-12-09 18:02:43.205551] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:20.443 [2024-12-09 18:02:43.242644] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.443 passed 00:13:20.443 00:13:20.443 Run Summary: Type Total Ran Passed Failed Inactive 00:13:20.443 suites 1 1 n/a 0 0 00:13:20.443 tests 18 18 18 0 0 00:13:20.443 asserts 360 360 360 0 n/a 00:13:20.443 00:13:20.443 Elapsed time = 1.566 seconds 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1448981 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1448981 ']' 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1448981 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448981 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448981' 00:13:20.443 killing process with pid 1448981 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1448981 00:13:20.443 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1448981 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:20.704 00:13:20.704 real 0m5.789s 00:13:20.704 user 0m16.242s 00:13:20.704 sys 0m0.523s 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.704 ************************************ 00:13:20.704 END TEST nvmf_vfio_user_nvme_compliance 00:13:20.704 ************************************ 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.704 ************************************ 00:13:20.704 START TEST nvmf_vfio_user_fuzz 00:13:20.704 ************************************ 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:20.704 * Looking for test storage... 00:13:20.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:20.704 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.705 18:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.705 --rc genhtml_branch_coverage=1 00:13:20.705 --rc genhtml_function_coverage=1 00:13:20.705 --rc genhtml_legend=1 00:13:20.705 --rc geninfo_all_blocks=1 00:13:20.705 --rc geninfo_unexecuted_blocks=1 00:13:20.705 00:13:20.705 ' 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.705 --rc genhtml_branch_coverage=1 00:13:20.705 --rc genhtml_function_coverage=1 00:13:20.705 --rc genhtml_legend=1 00:13:20.705 --rc geninfo_all_blocks=1 00:13:20.705 --rc geninfo_unexecuted_blocks=1 00:13:20.705 00:13:20.705 ' 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.705 --rc genhtml_branch_coverage=1 00:13:20.705 --rc genhtml_function_coverage=1 00:13:20.705 --rc genhtml_legend=1 00:13:20.705 --rc geninfo_all_blocks=1 00:13:20.705 --rc geninfo_unexecuted_blocks=1 00:13:20.705 00:13:20.705 ' 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:20.705 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:20.705 --rc genhtml_branch_coverage=1 00:13:20.705 --rc genhtml_function_coverage=1 00:13:20.705 --rc genhtml_legend=1 00:13:20.705 --rc geninfo_all_blocks=1 00:13:20.705 --rc geninfo_unexecuted_blocks=1 00:13:20.705 00:13:20.705 ' 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.705 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.963 18:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1449816 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1449816' 00:13:20.963 Process pid: 1449816 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1449816 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1449816 ']' 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.963 18:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.963 18:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.221 18:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.221 18:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:21.221 18:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 malloc0 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.156 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:22.156 18:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:54.222 Fuzzing completed. Shutting down the fuzz application 00:13:54.222 00:13:54.222 Dumping successful admin opcodes: 00:13:54.222 9, 10, 00:13:54.222 Dumping successful io opcodes: 00:13:54.222 0, 00:13:54.222 NS: 0x20000081ef00 I/O qp, Total commands completed: 633185, total successful commands: 2453, random_seed: 482246784 00:13:54.222 NS: 0x20000081ef00 admin qp, Total commands completed: 87456, total successful commands: 20, random_seed: 246911232 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1449816 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1449816 ']' 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1449816 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449816 00:13:54.222 18:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449816' 00:13:54.222 killing process with pid 1449816 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1449816 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1449816 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:54.222 00:13:54.222 real 0m32.182s 00:13:54.222 user 0m34.470s 00:13:54.222 sys 0m25.206s 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:54.222 ************************************ 00:13:54.222 END TEST nvmf_vfio_user_fuzz 00:13:54.222 ************************************ 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.222 ************************************ 00:13:54.222 START TEST nvmf_auth_target 00:13:54.222 ************************************ 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:54.222 * Looking for test storage... 00:13:54.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.222 18:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:54.222 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.223 18:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:54.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.223 --rc genhtml_branch_coverage=1 00:13:54.223 --rc genhtml_function_coverage=1 00:13:54.223 --rc genhtml_legend=1 00:13:54.223 --rc geninfo_all_blocks=1 00:13:54.223 --rc geninfo_unexecuted_blocks=1 00:13:54.223 00:13:54.223 ' 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:54.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.223 --rc genhtml_branch_coverage=1 00:13:54.223 --rc genhtml_function_coverage=1 00:13:54.223 --rc genhtml_legend=1 00:13:54.223 --rc geninfo_all_blocks=1 00:13:54.223 --rc geninfo_unexecuted_blocks=1 00:13:54.223 00:13:54.223 ' 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:54.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.223 --rc genhtml_branch_coverage=1 00:13:54.223 --rc genhtml_function_coverage=1 00:13:54.223 --rc genhtml_legend=1 00:13:54.223 --rc geninfo_all_blocks=1 00:13:54.223 --rc geninfo_unexecuted_blocks=1 00:13:54.223 00:13:54.223 ' 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:54.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.223 --rc genhtml_branch_coverage=1 00:13:54.223 --rc genhtml_function_coverage=1 00:13:54.223 --rc genhtml_legend=1 00:13:54.223 
--rc geninfo_all_blocks=1 00:13:54.223 --rc geninfo_unexecuted_blocks=1 00:13:54.223 00:13:54.223 ' 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.223 
18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.223 18:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:54.223 18:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.223 18:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.223 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.605 18:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.605 18:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:55.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:55.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.605 
18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:55.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.605 
18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.605 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:55.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.606 18:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:55.606 00:13:55.606 --- 10.0.0.2 ping statistics --- 00:13:55.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.606 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:13:55.606 00:13:55.606 --- 10.0.0.1 ping statistics --- 00:13:55.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.606 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1455779 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1455779 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1455779 ']' 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.606 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1455824 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:55.865 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=960ba004ffd1aa6f2f0209f373ca04c0b6d1ac46f010db51 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cDc 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 960ba004ffd1aa6f2f0209f373ca04c0b6d1ac46f010db51 0 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 960ba004ffd1aa6f2f0209f373ca04c0b6d1ac46f010db51 0 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=960ba004ffd1aa6f2f0209f373ca04c0b6d1ac46f010db51 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cDc 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cDc 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cDc 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=735ebfe9c5eede47a625b26324b06851ce590e6035d1dbb5150bd7a512c2b705 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IPv 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 735ebfe9c5eede47a625b26324b06851ce590e6035d1dbb5150bd7a512c2b705 3 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 735ebfe9c5eede47a625b26324b06851ce590e6035d1dbb5150bd7a512c2b705 3 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=735ebfe9c5eede47a625b26324b06851ce590e6035d1dbb5150bd7a512c2b705 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IPv 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IPv 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.IPv 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=96dafb42c800f854822ce8a84720a095 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JFB 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 96dafb42c800f854822ce8a84720a095 1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
96dafb42c800f854822ce8a84720a095 1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=96dafb42c800f854822ce8a84720a095 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JFB 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JFB 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JFB 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=087be724d35a96e77b2b93b9900286a53ee6ac711d5c3670 00:13:55.866 18:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.25T 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 087be724d35a96e77b2b93b9900286a53ee6ac711d5c3670 2 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 087be724d35a96e77b2b93b9900286a53ee6ac711d5c3670 2 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=087be724d35a96e77b2b93b9900286a53ee6ac711d5c3670 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:55.866 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.25T 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.25T 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.25T 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51a44ecfc2c05f4cfc77b30317e12250875104bf16c3dd7c 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wmj 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51a44ecfc2c05f4cfc77b30317e12250875104bf16c3dd7c 2 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51a44ecfc2c05f4cfc77b30317e12250875104bf16c3dd7c 2 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51a44ecfc2c05f4cfc77b30317e12250875104bf16c3dd7c 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wmj 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wmj 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.wmj 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2e091bcc288052f1ae2729ed5db24cf2 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wge 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2e091bcc288052f1ae2729ed5db24cf2 1 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2e091bcc288052f1ae2729ed5db24cf2 1 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2e091bcc288052f1ae2729ed5db24cf2 00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:13:56.125 18:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:56.125 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wge 00:13:56.125 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wge 00:13:56.125 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.wge 00:13:56.125 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:56.125 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6083af6411f7f22f0ddc6d3291183b57cb4841ccb714459aa02b53fb8c2c7807 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.b5b 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6083af6411f7f22f0ddc6d3291183b57cb4841ccb714459aa02b53fb8c2c7807 3 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6083af6411f7f22f0ddc6d3291183b57cb4841ccb714459aa02b53fb8c2c7807 3 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6083af6411f7f22f0ddc6d3291183b57cb4841ccb714459aa02b53fb8c2c7807 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.b5b 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.b5b 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.b5b 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1455779 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1455779 ']' 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
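The repeated `gen_dhchap_key`/`format_dhchap_key` calls above build NVMe DH-HMAC-CHAP secrets in the DHHC-1 interchange format: a hex string read from `/dev/urandom` is used as the ASCII secret, a little-endian CRC32 is appended, the result is base64-encoded, and the digest id is written as two hex digits in the header. A minimal standalone sketch of that formatting step, using the `keys[0]` value from the trace above (this mirrors what the inline `python -` heredoc in `nvmf/common.sh` does, but is a reconstruction, not the exact SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of SPDK's DHHC-1 key formatting: secret (as ASCII bytes) + CRC32,
# base64-encoded, with the digest id ("00" = no hash hint) in the header.
format_dhchap_key() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# DHHC-1 appends a little-endian CRC32 of the secret before base64-encoding
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# Secret value taken from keys[0] in the trace above (null digest = 0)
format_dhchap_key 960ba004ffd1aa6f2f0209f373ca04c0b6d1ac46f010db51 0
```

The output matches the `--dhchap-secret` string passed to `nvme connect` later in this log, which is how the host-side secret and the key file registered with the target stay in sync.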
00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.126 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1455824 /var/tmp/host.sock 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1455824 ']' 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:56.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
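Once both applications are listening, each generated key file is registered twice: with the nvmf target over the default RPC socket (`rpc_cmd`) and with the host-side `spdk_tgt` over `/var/tmp/host.sock` (`hostrpc`). A sketch for the first key/controller-key pair, assuming both daemons are already running and using the temp-file names from this particular run (they will differ on any other run):

```shell
# Hypothetical sketch: register the first DH-HMAC-CHAP key pair on both sides.
# Assumes nvmf_tgt (default socket /var/tmp/spdk.sock) and
# spdk_tgt -r /var/tmp/host.sock are running, and the key files exist.
rpc=scripts/rpc.py

# Target side: the subsystem's copies of the host key and controller key
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.cDc
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IPv

# Host side: the same files, registered with the host application's keyring
$rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.cDc
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IPv
```

Registering by name (`key0`/`ckey0`) is what lets later RPCs such as `nvmf_subsystem_add_host --dhchap-key key0 --dhchap-ctrlr-key ckey0` refer to the secrets without embedding them.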
00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.384 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cDc 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cDc 00:13:56.642 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cDc 00:13:57.207 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.IPv ]] 00:13:57.208 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IPv 00:13:57.208 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.208 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.208 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.208 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IPv 00:13:57.208 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IPv 00:13:57.208 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:57.208 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JFB 00:13:57.208 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.208 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.466 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.466 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JFB 00:13:57.466 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JFB 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.25T ]] 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25T 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25T 00:13:57.724 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25T 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wmj 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wmj 00:13:57.982 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wmj 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.wge ]] 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wge 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wge 00:13:58.240 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wge 00:13:58.497 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:58.497 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.b5b 00:13:58.497 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.497 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.498 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.498 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.b5b 00:13:58.498 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.b5b 00:13:58.755 18:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:58.755 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:58.755 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.756 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.756 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.756 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.013 18:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.013 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.271 00:13:59.271 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.271 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.271 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.528 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.528 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.528 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.528 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.528 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.528 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.529 { 00:13:59.529 "cntlid": 1, 00:13:59.529 "qid": 0, 00:13:59.529 "state": "enabled", 00:13:59.529 "thread": "nvmf_tgt_poll_group_000", 00:13:59.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:13:59.529 "listen_address": { 00:13:59.529 "trtype": "TCP", 00:13:59.529 "adrfam": "IPv4", 00:13:59.529 "traddr": "10.0.0.2", 00:13:59.529 "trsvcid": "4420" 00:13:59.529 }, 00:13:59.529 "peer_address": { 00:13:59.529 "trtype": "TCP", 00:13:59.529 "adrfam": "IPv4", 00:13:59.529 "traddr": "10.0.0.1", 00:13:59.529 "trsvcid": "33676" 00:13:59.529 }, 00:13:59.529 "auth": { 00:13:59.529 "state": "completed", 00:13:59.529 "digest": "sha256", 00:13:59.529 "dhgroup": "null" 00:13:59.529 } 00:13:59.529 } 00:13:59.529 ]' 00:13:59.529 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.787 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
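The `auth` block in the `nvmf_subsystem_get_qpairs` output above is the ground truth the test asserts on. The same checks can be reproduced offline by piping a captured qpair listing through `jq` with the filters `auth.sh` uses (the JSON below is abridged from the trace):

```shell
# Verify the negotiated DH-HMAC-CHAP parameters from a captured qpair listing.
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
echo "$qpairs" | jq -r '.[0].auth.digest'   # negotiated hash -> sha256
echo "$qpairs" | jq -r '.[0].auth.dhgroup'  # negotiated DH group -> null
echo "$qpairs" | jq -r '.[0].auth.state'    # "completed" means auth succeeded
```

An `auth.state` other than `completed` (or a missing `auth` block) is what distinguishes a plain connection from an authenticated one here.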
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.045 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:00.045 18:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:00.980 18:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.268 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.551 00:14:01.551 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.551 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.551 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.809 { 00:14:01.809 "cntlid": 3, 00:14:01.809 "qid": 0, 00:14:01.809 "state": "enabled", 00:14:01.809 "thread": "nvmf_tgt_poll_group_000", 00:14:01.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:01.809 "listen_address": { 00:14:01.809 "trtype": "TCP", 00:14:01.809 "adrfam": "IPv4", 00:14:01.809 
"traddr": "10.0.0.2", 00:14:01.809 "trsvcid": "4420" 00:14:01.809 }, 00:14:01.809 "peer_address": { 00:14:01.809 "trtype": "TCP", 00:14:01.809 "adrfam": "IPv4", 00:14:01.809 "traddr": "10.0.0.1", 00:14:01.809 "trsvcid": "33688" 00:14:01.809 }, 00:14:01.809 "auth": { 00:14:01.809 "state": "completed", 00:14:01.809 "digest": "sha256", 00:14:01.809 "dhgroup": "null" 00:14:01.809 } 00:14:01.809 } 00:14:01.809 ]' 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.809 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.067 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:02.067 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:02.999 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.999 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.999 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.000 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.000 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.000 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.000 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:03.000 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.257 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.823 00:14:03.823 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.823 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.823 
18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.081 { 00:14:04.081 "cntlid": 5, 00:14:04.081 "qid": 0, 00:14:04.081 "state": "enabled", 00:14:04.081 "thread": "nvmf_tgt_poll_group_000", 00:14:04.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:04.081 "listen_address": { 00:14:04.081 "trtype": "TCP", 00:14:04.081 "adrfam": "IPv4", 00:14:04.081 "traddr": "10.0.0.2", 00:14:04.081 "trsvcid": "4420" 00:14:04.081 }, 00:14:04.081 "peer_address": { 00:14:04.081 "trtype": "TCP", 00:14:04.081 "adrfam": "IPv4", 00:14:04.081 "traddr": "10.0.0.1", 00:14:04.081 "trsvcid": "33724" 00:14:04.081 }, 00:14:04.081 "auth": { 00:14:04.081 "state": "completed", 00:14:04.081 "digest": "sha256", 00:14:04.081 "dhgroup": "null" 00:14:04.081 } 00:14:04.081 } 00:14:04.081 ]' 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.081 18:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.339 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:04.339 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.272 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.530 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.788 00:14:05.788 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.788 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.788 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.046 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.046 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.046 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.046 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.046 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.046 
18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.046 { 00:14:06.046 "cntlid": 7, 00:14:06.046 "qid": 0, 00:14:06.046 "state": "enabled", 00:14:06.046 "thread": "nvmf_tgt_poll_group_000", 00:14:06.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:06.046 "listen_address": { 00:14:06.046 "trtype": "TCP", 00:14:06.046 "adrfam": "IPv4", 00:14:06.046 "traddr": "10.0.0.2", 00:14:06.046 "trsvcid": "4420" 00:14:06.046 }, 00:14:06.046 "peer_address": { 00:14:06.046 "trtype": "TCP", 00:14:06.046 "adrfam": "IPv4", 00:14:06.046 "traddr": "10.0.0.1", 00:14:06.046 "trsvcid": "33746" 00:14:06.046 }, 00:14:06.046 "auth": { 00:14:06.046 "state": "completed", 00:14:06.046 "digest": "sha256", 00:14:06.046 "dhgroup": "null" 00:14:06.046 } 00:14:06.046 } 00:14:06.046 ]' 00:14:06.046 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.304 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.562 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:06.562 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.496 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.754 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.320 00:14:08.320 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.320 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.320 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.578 { 00:14:08.578 "cntlid": 9, 00:14:08.578 "qid": 0, 00:14:08.578 "state": "enabled", 00:14:08.578 "thread": "nvmf_tgt_poll_group_000", 00:14:08.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:08.578 "listen_address": { 00:14:08.578 "trtype": "TCP", 00:14:08.578 "adrfam": "IPv4", 00:14:08.578 "traddr": "10.0.0.2", 00:14:08.578 "trsvcid": "4420" 00:14:08.578 }, 00:14:08.578 "peer_address": { 00:14:08.578 "trtype": "TCP", 00:14:08.578 "adrfam": "IPv4", 00:14:08.578 "traddr": "10.0.0.1", 00:14:08.578 "trsvcid": "33768" 00:14:08.578 
}, 00:14:08.578 "auth": { 00:14:08.578 "state": "completed", 00:14:08.578 "digest": "sha256", 00:14:08.578 "dhgroup": "ffdhe2048" 00:14:08.578 } 00:14:08.578 } 00:14:08.578 ]' 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.578 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.836 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:08.836 18:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret 
DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.769 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.028 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.593 00:14:10.593 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.593 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.593 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.851 { 00:14:10.851 "cntlid": 11, 00:14:10.851 "qid": 0, 00:14:10.851 "state": "enabled", 00:14:10.851 "thread": "nvmf_tgt_poll_group_000", 00:14:10.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:10.851 "listen_address": { 00:14:10.851 "trtype": "TCP", 00:14:10.851 "adrfam": "IPv4", 00:14:10.851 "traddr": "10.0.0.2", 00:14:10.851 "trsvcid": "4420" 00:14:10.851 }, 00:14:10.851 "peer_address": { 00:14:10.851 "trtype": "TCP", 00:14:10.851 "adrfam": "IPv4", 00:14:10.851 "traddr": "10.0.0.1", 00:14:10.851 "trsvcid": "60304" 00:14:10.851 }, 00:14:10.851 "auth": { 00:14:10.851 "state": "completed", 00:14:10.851 "digest": "sha256", 00:14:10.851 "dhgroup": "ffdhe2048" 00:14:10.851 } 00:14:10.851 } 00:14:10.851 ]' 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.851 18:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.851 18:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.109 18:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:11.109 18:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:12.042 18:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.042 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.608 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.865 00:14:12.866 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.866 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.866 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.123 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.123 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.123 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.123 18:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.123 18:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.123 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.123 { 00:14:13.123 "cntlid": 13, 00:14:13.123 "qid": 0, 00:14:13.123 "state": "enabled", 00:14:13.123 "thread": "nvmf_tgt_poll_group_000", 00:14:13.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:13.123 "listen_address": { 00:14:13.123 "trtype": "TCP", 00:14:13.123 "adrfam": "IPv4", 00:14:13.124 "traddr": "10.0.0.2", 00:14:13.124 "trsvcid": "4420" 00:14:13.124 }, 00:14:13.124 "peer_address": { 00:14:13.124 "trtype": "TCP", 00:14:13.124 "adrfam": "IPv4", 00:14:13.124 "traddr": "10.0.0.1", 00:14:13.124 "trsvcid": "60328" 00:14:13.124 }, 00:14:13.124 "auth": { 00:14:13.124 "state": "completed", 00:14:13.124 "digest": "sha256", 00:14:13.124 "dhgroup": "ffdhe2048" 00:14:13.124 } 00:14:13.124 } 00:14:13.124 ]' 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.124 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.688 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:13.688 18:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:14.621 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.878 18:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.136 00:14:15.136 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.136 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.136 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.393 { 00:14:15.393 "cntlid": 15, 00:14:15.393 "qid": 0, 00:14:15.393 "state": "enabled", 00:14:15.393 "thread": "nvmf_tgt_poll_group_000", 00:14:15.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:15.393 "listen_address": { 00:14:15.393 "trtype": "TCP", 00:14:15.393 "adrfam": "IPv4", 00:14:15.393 "traddr": "10.0.0.2", 00:14:15.393 "trsvcid": "4420" 00:14:15.393 }, 00:14:15.393 "peer_address": { 00:14:15.393 "trtype": "TCP", 00:14:15.393 "adrfam": "IPv4", 00:14:15.393 "traddr": "10.0.0.1", 
00:14:15.393 "trsvcid": "60354" 00:14:15.393 }, 00:14:15.393 "auth": { 00:14:15.393 "state": "completed", 00:14:15.393 "digest": "sha256", 00:14:15.393 "dhgroup": "ffdhe2048" 00:14:15.393 } 00:14:15.393 } 00:14:15.393 ]' 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.393 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.651 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.651 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.651 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.909 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:15.909 18:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:16.840 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.840 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.840 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.840 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.840 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.840 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.841 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.841 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.841 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.098 18:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.098 18:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.355 00:14:17.355 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.355 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.355 18:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.613 { 00:14:17.613 "cntlid": 17, 00:14:17.613 "qid": 0, 00:14:17.613 "state": "enabled", 00:14:17.613 "thread": "nvmf_tgt_poll_group_000", 00:14:17.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:17.613 "listen_address": { 00:14:17.613 "trtype": "TCP", 00:14:17.613 "adrfam": "IPv4", 00:14:17.613 "traddr": "10.0.0.2", 00:14:17.613 "trsvcid": "4420" 00:14:17.613 }, 00:14:17.613 "peer_address": { 00:14:17.613 "trtype": "TCP", 00:14:17.613 "adrfam": "IPv4", 00:14:17.613 "traddr": "10.0.0.1", 00:14:17.613 "trsvcid": "60376" 00:14:17.613 }, 00:14:17.613 "auth": { 00:14:17.613 "state": "completed", 00:14:17.613 "digest": "sha256", 00:14:17.613 "dhgroup": "ffdhe3072" 00:14:17.613 } 00:14:17.613 } 00:14:17.613 ]' 00:14:17.613 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.871 18:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.128 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:18.128 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:19.060 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.060 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.060 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.061 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.061 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.061 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.061 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.061 18:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.318 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.576 00:14:19.576 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.576 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.576 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.834 
18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.834 { 00:14:19.834 "cntlid": 19, 00:14:19.834 "qid": 0, 00:14:19.834 "state": "enabled", 00:14:19.834 "thread": "nvmf_tgt_poll_group_000", 00:14:19.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:19.834 "listen_address": { 00:14:19.834 "trtype": "TCP", 00:14:19.834 "adrfam": "IPv4", 00:14:19.834 "traddr": "10.0.0.2", 00:14:19.834 "trsvcid": "4420" 00:14:19.834 }, 00:14:19.834 "peer_address": { 00:14:19.834 "trtype": "TCP", 00:14:19.834 "adrfam": "IPv4", 00:14:19.834 "traddr": "10.0.0.1", 00:14:19.834 "trsvcid": "37052" 00:14:19.834 }, 00:14:19.834 "auth": { 00:14:19.834 "state": "completed", 00:14:19.834 "digest": "sha256", 00:14:19.834 "dhgroup": "ffdhe3072" 00:14:19.834 } 00:14:19.834 } 00:14:19.834 ]' 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.834 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.091 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.091 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.091 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.091 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.092 18:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.349 18:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:20.349 18:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.281 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.539 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.539 18:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.797 00:14:21.797 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.797 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.797 18:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.054 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.054 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.054 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.054 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.311 { 00:14:22.311 "cntlid": 21, 00:14:22.311 "qid": 0, 00:14:22.311 "state": "enabled", 00:14:22.311 "thread": "nvmf_tgt_poll_group_000", 00:14:22.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:22.311 "listen_address": { 00:14:22.311 "trtype": "TCP", 00:14:22.311 "adrfam": "IPv4", 00:14:22.311 "traddr": "10.0.0.2", 00:14:22.311 "trsvcid": "4420" 00:14:22.311 }, 00:14:22.311 "peer_address": { 
00:14:22.311 "trtype": "TCP", 00:14:22.311 "adrfam": "IPv4", 00:14:22.311 "traddr": "10.0.0.1", 00:14:22.311 "trsvcid": "37086" 00:14:22.311 }, 00:14:22.311 "auth": { 00:14:22.311 "state": "completed", 00:14:22.311 "digest": "sha256", 00:14:22.311 "dhgroup": "ffdhe3072" 00:14:22.311 } 00:14:22.311 } 00:14:22.311 ]' 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.311 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.568 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:22.568 18:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.502 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:23.760 18:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.760 18:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.325 00:14:24.325 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.325 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.325 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.583 { 00:14:24.583 "cntlid": 23, 00:14:24.583 "qid": 0, 00:14:24.583 "state": "enabled", 00:14:24.583 "thread": "nvmf_tgt_poll_group_000", 00:14:24.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:24.583 "listen_address": { 00:14:24.583 "trtype": "TCP", 00:14:24.583 "adrfam": "IPv4", 00:14:24.583 "traddr": "10.0.0.2", 00:14:24.583 "trsvcid": "4420" 00:14:24.583 }, 00:14:24.583 "peer_address": { 00:14:24.583 "trtype": "TCP", 00:14:24.583 "adrfam": "IPv4", 00:14:24.583 "traddr": "10.0.0.1", 00:14:24.583 "trsvcid": "37108" 00:14:24.583 }, 00:14:24.583 "auth": { 00:14:24.583 "state": "completed", 00:14:24.583 "digest": "sha256", 00:14:24.583 "dhgroup": "ffdhe3072" 00:14:24.583 } 00:14:24.583 } 00:14:24.583 ]' 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.583 18:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.583 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.841 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:24.841 18:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:25.773 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.774 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.031 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.032 18:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.597 00:14:26.597 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.597 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.597 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.855 18:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.855 { 00:14:26.855 "cntlid": 25, 00:14:26.855 "qid": 0, 00:14:26.855 "state": "enabled", 00:14:26.855 "thread": "nvmf_tgt_poll_group_000", 00:14:26.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:26.855 "listen_address": { 00:14:26.855 "trtype": "TCP", 00:14:26.855 "adrfam": "IPv4", 00:14:26.855 "traddr": "10.0.0.2", 00:14:26.855 "trsvcid": "4420" 00:14:26.855 }, 00:14:26.855 "peer_address": { 00:14:26.855 "trtype": "TCP", 00:14:26.855 "adrfam": "IPv4", 00:14:26.855 "traddr": "10.0.0.1", 00:14:26.855 "trsvcid": "37142" 00:14:26.855 }, 00:14:26.855 "auth": { 00:14:26.855 "state": "completed", 00:14:26.855 "digest": "sha256", 00:14:26.855 "dhgroup": "ffdhe4096" 00:14:26.855 } 00:14:26.855 } 00:14:26.855 ]' 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.855 18:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.113 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:27.113 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.046 18:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.046 18:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.304 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.869 00:14:28.869 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.869 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.869 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.127 { 00:14:29.127 "cntlid": 27, 00:14:29.127 "qid": 0, 00:14:29.127 "state": "enabled", 00:14:29.127 "thread": "nvmf_tgt_poll_group_000", 00:14:29.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:29.127 "listen_address": { 00:14:29.127 "trtype": "TCP", 00:14:29.127 "adrfam": "IPv4", 00:14:29.127 "traddr": "10.0.0.2", 00:14:29.127 
"trsvcid": "4420" 00:14:29.127 }, 00:14:29.127 "peer_address": { 00:14:29.127 "trtype": "TCP", 00:14:29.127 "adrfam": "IPv4", 00:14:29.127 "traddr": "10.0.0.1", 00:14:29.127 "trsvcid": "37164" 00:14:29.127 }, 00:14:29.127 "auth": { 00:14:29.127 "state": "completed", 00:14:29.127 "digest": "sha256", 00:14:29.127 "dhgroup": "ffdhe4096" 00:14:29.127 } 00:14:29.127 } 00:14:29.127 ]' 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.127 18:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.127 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.127 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.127 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.127 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.127 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.385 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:29.385 18:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:30.317 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.317 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.317 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.317 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.317 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.317 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.318 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.318 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.576 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.836 00:14:31.125 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.125 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:31.125 18:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.407 { 00:14:31.407 "cntlid": 29, 00:14:31.407 "qid": 0, 00:14:31.407 "state": "enabled", 00:14:31.407 "thread": "nvmf_tgt_poll_group_000", 00:14:31.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:31.407 "listen_address": { 00:14:31.407 "trtype": "TCP", 00:14:31.407 "adrfam": "IPv4", 00:14:31.407 "traddr": "10.0.0.2", 00:14:31.407 "trsvcid": "4420" 00:14:31.407 }, 00:14:31.407 "peer_address": { 00:14:31.407 "trtype": "TCP", 00:14:31.407 "adrfam": "IPv4", 00:14:31.407 "traddr": "10.0.0.1", 00:14:31.407 "trsvcid": "53342" 00:14:31.407 }, 00:14:31.407 "auth": { 00:14:31.407 "state": "completed", 00:14:31.407 "digest": "sha256", 00:14:31.407 "dhgroup": "ffdhe4096" 00:14:31.407 } 00:14:31.407 } 00:14:31.407 ]' 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.407 18:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.407 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.665 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:31.665 18:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.600 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.858 18:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.117 00:14:33.117 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.117 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.117 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.375 { 00:14:33.375 "cntlid": 31, 00:14:33.375 "qid": 0, 00:14:33.375 "state": "enabled", 00:14:33.375 "thread": "nvmf_tgt_poll_group_000", 00:14:33.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:33.375 "listen_address": { 00:14:33.375 "trtype": "TCP", 00:14:33.375 "adrfam": "IPv4", 00:14:33.375 "traddr": "10.0.0.2", 00:14:33.375 "trsvcid": "4420" 00:14:33.375 }, 00:14:33.375 "peer_address": { 00:14:33.375 "trtype": "TCP", 00:14:33.375 "adrfam": "IPv4", 00:14:33.375 "traddr": "10.0.0.1", 00:14:33.375 "trsvcid": "53368" 00:14:33.375 }, 00:14:33.375 "auth": { 00:14:33.375 "state": "completed", 00:14:33.375 "digest": "sha256", 00:14:33.375 "dhgroup": "ffdhe4096" 00:14:33.375 } 00:14:33.375 } 00:14:33.375 ]' 00:14:33.375 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.633 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.892 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:33.892 18:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.826 18:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.826 18:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.085 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.651 00:14:35.651 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.651 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.651 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.909 { 00:14:35.909 "cntlid": 33, 00:14:35.909 "qid": 0, 00:14:35.909 "state": "enabled", 00:14:35.909 "thread": "nvmf_tgt_poll_group_000", 00:14:35.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:35.909 "listen_address": { 00:14:35.909 "trtype": "TCP", 00:14:35.909 "adrfam": "IPv4", 00:14:35.909 "traddr": "10.0.0.2", 00:14:35.909 
"trsvcid": "4420" 00:14:35.909 }, 00:14:35.909 "peer_address": { 00:14:35.909 "trtype": "TCP", 00:14:35.909 "adrfam": "IPv4", 00:14:35.909 "traddr": "10.0.0.1", 00:14:35.909 "trsvcid": "53388" 00:14:35.909 }, 00:14:35.909 "auth": { 00:14:35.909 "state": "completed", 00:14:35.909 "digest": "sha256", 00:14:35.909 "dhgroup": "ffdhe6144" 00:14:35.909 } 00:14:35.909 } 00:14:35.909 ]' 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.909 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.168 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.168 18:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.168 18:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.168 18:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.168 18:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.426 18:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:36.426 18:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.359 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.617 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:37.617 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.617 18:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.617 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:37.617 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.617 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.618 18:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.184 00:14:38.184 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.184 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.184 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.441 { 00:14:38.441 "cntlid": 35, 00:14:38.441 "qid": 0, 00:14:38.441 "state": "enabled", 00:14:38.441 "thread": "nvmf_tgt_poll_group_000", 00:14:38.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:38.441 "listen_address": { 00:14:38.441 "trtype": "TCP", 00:14:38.441 "adrfam": "IPv4", 00:14:38.441 "traddr": "10.0.0.2", 00:14:38.441 "trsvcid": "4420" 00:14:38.441 }, 00:14:38.441 "peer_address": { 00:14:38.441 "trtype": "TCP", 00:14:38.441 "adrfam": "IPv4", 00:14:38.441 "traddr": "10.0.0.1", 00:14:38.441 "trsvcid": "53428" 00:14:38.441 }, 00:14:38.441 "auth": { 00:14:38.441 "state": "completed", 00:14:38.441 "digest": "sha256", 00:14:38.441 "dhgroup": "ffdhe6144" 00:14:38.441 } 00:14:38.441 } 00:14:38.441 ]' 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.441 18:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:38.441 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.699 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.699 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.699 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.957 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:38.957 18:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.893 18:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.151 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.152 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.152 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.152 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.719 00:14:40.719 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.719 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.719 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.977 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.977 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.977 18:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.977 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.977 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.977 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.977 { 00:14:40.977 "cntlid": 37, 00:14:40.977 "qid": 0, 00:14:40.977 "state": "enabled", 00:14:40.977 "thread": "nvmf_tgt_poll_group_000", 00:14:40.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:40.977 "listen_address": { 00:14:40.977 "trtype": "TCP", 00:14:40.977 "adrfam": "IPv4", 00:14:40.977 "traddr": "10.0.0.2", 00:14:40.977 "trsvcid": "4420" 00:14:40.977 }, 00:14:40.977 "peer_address": { 00:14:40.977 "trtype": "TCP", 00:14:40.977 "adrfam": "IPv4", 00:14:40.978 "traddr": "10.0.0.1", 00:14:40.978 "trsvcid": "54172" 00:14:40.978 }, 00:14:40.978 "auth": { 00:14:40.978 "state": "completed", 00:14:40.978 "digest": "sha256", 00:14:40.978 "dhgroup": "ffdhe6144" 00:14:40.978 } 00:14:40.978 } 00:14:40.978 ]' 00:14:40.978 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.978 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.978 18:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.236 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.236 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.236 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.236 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.236 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.494 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:41.494 18:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.427 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.685 18:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.251 00:14:43.251 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.251 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.251 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.509 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.509 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.509 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.509 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.509 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.509 { 00:14:43.509 "cntlid": 39, 00:14:43.509 "qid": 0, 00:14:43.509 "state": "enabled", 00:14:43.509 "thread": "nvmf_tgt_poll_group_000", 00:14:43.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:43.509 "listen_address": { 00:14:43.510 "trtype": "TCP", 00:14:43.510 "adrfam": 
"IPv4", 00:14:43.510 "traddr": "10.0.0.2", 00:14:43.510 "trsvcid": "4420" 00:14:43.510 }, 00:14:43.510 "peer_address": { 00:14:43.510 "trtype": "TCP", 00:14:43.510 "adrfam": "IPv4", 00:14:43.510 "traddr": "10.0.0.1", 00:14:43.510 "trsvcid": "54194" 00:14:43.510 }, 00:14:43.510 "auth": { 00:14:43.510 "state": "completed", 00:14:43.510 "digest": "sha256", 00:14:43.510 "dhgroup": "ffdhe6144" 00:14:43.510 } 00:14:43.510 } 00:14:43.510 ]' 00:14:43.510 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.768 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.026 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:44.026 18:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.960 18:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.218 
18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.218 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.154 00:14:46.154 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.154 18:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.154 18:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.412 { 00:14:46.412 "cntlid": 41, 00:14:46.412 "qid": 0, 00:14:46.412 "state": "enabled", 00:14:46.412 "thread": "nvmf_tgt_poll_group_000", 00:14:46.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:46.412 "listen_address": { 00:14:46.412 "trtype": "TCP", 00:14:46.412 "adrfam": "IPv4", 00:14:46.412 "traddr": "10.0.0.2", 00:14:46.412 "trsvcid": "4420" 00:14:46.412 }, 00:14:46.412 "peer_address": { 00:14:46.412 "trtype": "TCP", 00:14:46.412 "adrfam": "IPv4", 00:14:46.412 "traddr": "10.0.0.1", 00:14:46.412 "trsvcid": "54222" 00:14:46.412 }, 00:14:46.412 "auth": { 00:14:46.412 "state": "completed", 00:14:46.412 "digest": "sha256", 00:14:46.412 "dhgroup": "ffdhe8192" 00:14:46.412 } 00:14:46.412 } 00:14:46.412 ]' 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.412 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.671 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:46.671 18:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.608 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.867 18:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.804 00:14:48.804 18:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.804 18:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.804 18:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.063 18:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.063 { 00:14:49.063 "cntlid": 43, 00:14:49.063 "qid": 0, 00:14:49.063 "state": "enabled", 00:14:49.063 "thread": "nvmf_tgt_poll_group_000", 00:14:49.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:49.063 "listen_address": { 00:14:49.063 "trtype": "TCP", 00:14:49.063 "adrfam": "IPv4", 00:14:49.063 "traddr": "10.0.0.2", 00:14:49.063 "trsvcid": "4420" 00:14:49.063 }, 00:14:49.063 "peer_address": { 00:14:49.063 "trtype": "TCP", 00:14:49.063 "adrfam": "IPv4", 00:14:49.063 "traddr": "10.0.0.1", 00:14:49.063 "trsvcid": "54258" 00:14:49.063 }, 00:14:49.063 "auth": { 00:14:49.063 "state": "completed", 00:14:49.063 "digest": "sha256", 00:14:49.063 "dhgroup": "ffdhe8192" 00:14:49.063 } 00:14:49.063 } 00:14:49.063 ]' 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.063 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.321 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.321 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.321 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.321 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.321 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.579 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:49.579 18:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.517 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.776 18:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.715 00:14:51.715 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.715 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.715 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.973 { 00:14:51.973 "cntlid": 45, 00:14:51.973 "qid": 0, 00:14:51.973 "state": "enabled", 00:14:51.973 "thread": "nvmf_tgt_poll_group_000", 00:14:51.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:51.973 
"listen_address": { 00:14:51.973 "trtype": "TCP", 00:14:51.973 "adrfam": "IPv4", 00:14:51.973 "traddr": "10.0.0.2", 00:14:51.973 "trsvcid": "4420" 00:14:51.973 }, 00:14:51.973 "peer_address": { 00:14:51.973 "trtype": "TCP", 00:14:51.973 "adrfam": "IPv4", 00:14:51.973 "traddr": "10.0.0.1", 00:14:51.973 "trsvcid": "55190" 00:14:51.973 }, 00:14:51.973 "auth": { 00:14:51.973 "state": "completed", 00:14:51.973 "digest": "sha256", 00:14:51.973 "dhgroup": "ffdhe8192" 00:14:51.973 } 00:14:51.973 } 00:14:51.973 ]' 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.973 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.974 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.974 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.974 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.974 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.974 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.974 18:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.550 18:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:52.550 18:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.488 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.489 18:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:54.429 00:14:54.429 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.429 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.429 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.687 { 00:14:54.687 "cntlid": 47, 00:14:54.687 "qid": 0, 00:14:54.687 "state": "enabled", 00:14:54.687 "thread": "nvmf_tgt_poll_group_000", 00:14:54.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:54.687 "listen_address": { 00:14:54.687 "trtype": "TCP", 00:14:54.687 "adrfam": "IPv4", 00:14:54.687 "traddr": "10.0.0.2", 00:14:54.687 "trsvcid": "4420" 00:14:54.687 }, 00:14:54.687 "peer_address": { 00:14:54.687 "trtype": "TCP", 00:14:54.687 "adrfam": "IPv4", 00:14:54.687 "traddr": "10.0.0.1", 00:14:54.687 "trsvcid": "55208" 00:14:54.687 }, 00:14:54.687 "auth": { 00:14:54.687 "state": "completed", 00:14:54.687 "digest": "sha256", 00:14:54.687 "dhgroup": "ffdhe8192" 00:14:54.687 } 00:14:54.687 } 00:14:54.687 ]' 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.687 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.687 18:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.945 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.945 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.946 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.946 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.946 18:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.202 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:55.202 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.135 18:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.393 
18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.393 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.651 00:14:56.651 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.651 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.651 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.910 { 00:14:56.910 "cntlid": 49, 00:14:56.910 "qid": 0, 00:14:56.910 "state": "enabled", 00:14:56.910 "thread": "nvmf_tgt_poll_group_000", 00:14:56.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:56.910 "listen_address": { 00:14:56.910 "trtype": "TCP", 00:14:56.910 "adrfam": "IPv4", 00:14:56.910 "traddr": "10.0.0.2", 00:14:56.910 "trsvcid": "4420" 00:14:56.910 }, 00:14:56.910 "peer_address": { 00:14:56.910 "trtype": "TCP", 00:14:56.910 "adrfam": "IPv4", 00:14:56.910 "traddr": "10.0.0.1", 00:14:56.910 "trsvcid": "55238" 00:14:56.910 }, 00:14:56.910 "auth": { 00:14:56.910 "state": "completed", 00:14:56.910 "digest": "sha384", 00:14:56.910 "dhgroup": "null" 00:14:56.910 } 00:14:56.910 } 00:14:56.910 ]' 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:56.910 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.168 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.168 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:14:57.168 18:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.428 18:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:57.428 18:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.370 18:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.370 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.629 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.888 00:14:58.888 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.888 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.888 18:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.146 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.146 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.146 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.146 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.146 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.146 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.146 { 00:14:59.146 "cntlid": 51, 00:14:59.146 "qid": 0, 00:14:59.146 "state": "enabled", 00:14:59.146 "thread": "nvmf_tgt_poll_group_000", 00:14:59.146 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:59.146 "listen_address": { 00:14:59.146 "trtype": "TCP", 00:14:59.146 "adrfam": "IPv4", 00:14:59.146 "traddr": "10.0.0.2", 00:14:59.146 "trsvcid": "4420" 00:14:59.147 }, 00:14:59.147 "peer_address": { 00:14:59.147 "trtype": "TCP", 00:14:59.147 "adrfam": "IPv4", 00:14:59.147 "traddr": "10.0.0.1", 00:14:59.147 "trsvcid": "55260" 00:14:59.147 }, 00:14:59.147 "auth": { 00:14:59.147 "state": "completed", 00:14:59.147 "digest": "sha384", 00:14:59.147 "dhgroup": "null" 00:14:59.147 } 00:14:59.147 } 00:14:59.147 ]' 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.147 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.406 18:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:14:59.406 18:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.345 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.604 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.864 00:15:01.129 18:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.129 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.129 18:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.445 { 00:15:01.445 "cntlid": 53, 00:15:01.445 "qid": 0, 00:15:01.445 "state": "enabled", 00:15:01.445 "thread": "nvmf_tgt_poll_group_000", 00:15:01.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:01.445 "listen_address": { 00:15:01.445 "trtype": "TCP", 00:15:01.445 "adrfam": "IPv4", 00:15:01.445 "traddr": "10.0.0.2", 00:15:01.445 "trsvcid": "4420" 00:15:01.445 }, 00:15:01.445 "peer_address": { 00:15:01.445 "trtype": "TCP", 00:15:01.445 "adrfam": "IPv4", 00:15:01.445 "traddr": "10.0.0.1", 00:15:01.445 "trsvcid": "50894" 00:15:01.445 }, 00:15:01.445 "auth": { 00:15:01.445 "state": "completed", 00:15:01.445 "digest": "sha384", 00:15:01.445 "dhgroup": "null" 00:15:01.445 } 00:15:01.445 } 00:15:01.445 ]' 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.445 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.731 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:01.731 18:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.671 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:02.929 
18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.929 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.930 18:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.188 00:15:03.188 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.188 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.188 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.446 18:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.446 { 00:15:03.446 "cntlid": 55, 00:15:03.446 "qid": 0, 00:15:03.446 "state": "enabled", 00:15:03.446 "thread": "nvmf_tgt_poll_group_000", 00:15:03.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:03.446 "listen_address": { 00:15:03.446 "trtype": "TCP", 00:15:03.446 "adrfam": "IPv4", 00:15:03.446 "traddr": "10.0.0.2", 00:15:03.446 "trsvcid": "4420" 00:15:03.446 }, 00:15:03.446 "peer_address": { 00:15:03.446 "trtype": "TCP", 00:15:03.446 "adrfam": "IPv4", 00:15:03.446 "traddr": "10.0.0.1", 00:15:03.446 "trsvcid": "50924" 00:15:03.446 }, 00:15:03.446 "auth": { 00:15:03.446 "state": "completed", 00:15:03.446 "digest": "sha384", 00:15:03.446 "dhgroup": "null" 00:15:03.446 } 00:15:03.446 } 00:15:03.446 ]' 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.446 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.704 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.704 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.704 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.704 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.704 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.962 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:03.962 18:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.898 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.898 18:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:05.157 18:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:05.415
00:15:05.415 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:05.415 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:05.415 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.673 {
00:15:05.673 "cntlid": 57,
00:15:05.673 "qid": 0,
00:15:05.673 "state": "enabled",
00:15:05.673 "thread": "nvmf_tgt_poll_group_000",
00:15:05.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:05.673 "listen_address": {
00:15:05.673 "trtype": "TCP",
00:15:05.673 "adrfam": "IPv4",
00:15:05.673 "traddr": "10.0.0.2",
00:15:05.673 "trsvcid": "4420"
00:15:05.673 },
00:15:05.673 "peer_address": {
00:15:05.673 "trtype": "TCP",
00:15:05.673 "adrfam": "IPv4",
00:15:05.673 "traddr": "10.0.0.1",
00:15:05.673 "trsvcid": "50966"
00:15:05.673 },
00:15:05.673 "auth": {
00:15:05.673 "state": "completed",
00:15:05.673 "digest": "sha384",
00:15:05.673 "dhgroup": "ffdhe2048"
00:15:05.673 }
00:15:05.673 }
00:15:05.673 ]'
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:05.673 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:05.931 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.931 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.931 18:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:06.189 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:15:06.189 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:07.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:07.129 18:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.129 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.387 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.387 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:07.387 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:07.387 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:07.645
00:15:07.645 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:07.645 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:07.645 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:07.903 {
00:15:07.903 "cntlid": 59,
00:15:07.903 "qid": 0,
00:15:07.903 "state": "enabled",
00:15:07.903 "thread": "nvmf_tgt_poll_group_000",
00:15:07.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:07.903 "listen_address": {
00:15:07.903 "trtype": "TCP",
00:15:07.903 "adrfam": "IPv4",
00:15:07.903 "traddr": "10.0.0.2",
00:15:07.903 "trsvcid": "4420"
00:15:07.903 },
00:15:07.903 "peer_address": {
00:15:07.903 "trtype": "TCP",
00:15:07.903 "adrfam": "IPv4",
00:15:07.903 "traddr": "10.0.0.1",
00:15:07.903 "trsvcid": "50994"
00:15:07.903 },
00:15:07.903 "auth": {
00:15:07.903 "state": "completed",
00:15:07.903 "digest": "sha384",
00:15:07.903 "dhgroup": "ffdhe2048"
00:15:07.903 }
00:15:07.903 }
00:15:07.903 ]'
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:07.903 18:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:08.472 18:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==:
00:15:08.472 18:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==:
00:15:09.038 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:09.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:09.297 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:09.555 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:09.813
00:15:09.813 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:09.813 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:09.813 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:10.071 {
00:15:10.071 "cntlid": 61,
00:15:10.071 "qid": 0,
00:15:10.071 "state": "enabled",
00:15:10.071 "thread": "nvmf_tgt_poll_group_000",
00:15:10.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:10.071 "listen_address": {
00:15:10.071 "trtype": "TCP",
00:15:10.071 "adrfam": "IPv4",
00:15:10.071 "traddr": "10.0.0.2",
00:15:10.071 "trsvcid": "4420"
00:15:10.071 },
00:15:10.071 "peer_address": {
00:15:10.071 "trtype": "TCP",
00:15:10.071 "adrfam": "IPv4",
00:15:10.071 "traddr": "10.0.0.1",
00:15:10.071 "trsvcid": "36102"
00:15:10.071 },
00:15:10.071 "auth": {
00:15:10.071 "state": "completed",
00:15:10.071 "digest": "sha384",
00:15:10.071 "dhgroup": "ffdhe2048"
00:15:10.071 }
00:15:10.071 }
00:15:10.071 ]'
00:15:10.071 18:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:10.071 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:10.071 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:10.071 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:10.071 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:10.329 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:10.329 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:10.329 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:10.588 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu:
00:15:10.588 18:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu:
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:11.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:11.525 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:11.526 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:11.784 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:12.042
00:15:12.042 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:12.042 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:12.042 18:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:12.299 {
00:15:12.299 "cntlid": 63,
00:15:12.299 "qid": 0,
00:15:12.299 "state": "enabled",
00:15:12.299 "thread": "nvmf_tgt_poll_group_000",
00:15:12.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:12.299 "listen_address": {
00:15:12.299 "trtype": "TCP",
00:15:12.299 "adrfam": "IPv4",
00:15:12.299 "traddr": "10.0.0.2",
00:15:12.299 "trsvcid": "4420"
00:15:12.299 },
00:15:12.299 "peer_address": {
00:15:12.299 "trtype": "TCP",
00:15:12.299 "adrfam": "IPv4",
00:15:12.299 "traddr": "10.0.0.1",
00:15:12.299 "trsvcid": "36134"
00:15:12.299 },
00:15:12.299 "auth": {
00:15:12.299 "state": "completed",
00:15:12.299 "digest": "sha384",
00:15:12.299 "dhgroup": "ffdhe2048"
00:15:12.299 }
00:15:12.299 }
00:15:12.299 ]'
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:12.299 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:12.557 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:12.557 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:12.557 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:12.557 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:12.557 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:12.815 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:
00:15:12.815 18:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:13.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:13.752 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.010 18:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.269
00:15:14.269 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:14.269 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:14.269 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:14.527 {
00:15:14.527 "cntlid": 65,
00:15:14.527 "qid": 0,
00:15:14.527 "state": "enabled",
00:15:14.527 "thread": "nvmf_tgt_poll_group_000",
00:15:14.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:14.527 "listen_address": {
00:15:14.527 "trtype": "TCP",
00:15:14.527 "adrfam": "IPv4",
00:15:14.527 "traddr": "10.0.0.2",
00:15:14.527 "trsvcid": "4420"
00:15:14.527 },
00:15:14.527 "peer_address": {
00:15:14.527 "trtype": "TCP",
00:15:14.527 "adrfam": "IPv4",
00:15:14.527 "traddr": "10.0.0.1",
00:15:14.527 "trsvcid": "36172"
00:15:14.527 },
00:15:14.527 "auth": {
00:15:14.527 "state": "completed",
00:15:14.527 "digest": "sha384",
00:15:14.527 "dhgroup": "ffdhe3072"
00:15:14.527 }
00:15:14.527 }
00:15:14.527 ]'
00:15:14.527 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:14.785 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:15.043 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:15:15.043 18:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:15.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:15.981 18:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.242 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:16.501
00:15:16.501 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:16.501 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:16.501 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:16.760 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:17.019 {
00:15:17.019 "cntlid": 67,
00:15:17.019 "qid": 0,
00:15:17.019 "state": "enabled",
00:15:17.019 "thread": "nvmf_tgt_poll_group_000",
00:15:17.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:17.019 "listen_address": {
00:15:17.019 "trtype": "TCP",
00:15:17.019 "adrfam": "IPv4",
00:15:17.019 "traddr": "10.0.0.2",
00:15:17.019 "trsvcid": "4420"
00:15:17.019 },
00:15:17.019 "peer_address": {
00:15:17.019 "trtype": "TCP",
00:15:17.019 "adrfam": "IPv4",
00:15:17.019 "traddr": "10.0.0.1",
00:15:17.019 "trsvcid": "36188"
00:15:17.019 },
00:15:17.019 "auth": {
00:15:17.019 "state": "completed",
00:15:17.019 "digest": "sha384",
00:15:17.019 "dhgroup": "ffdhe3072"
00:15:17.019 }
00:15:17.019 }
00:15:17.019 ]'
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.019 18:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.278 18:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:17.278 18:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.214 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.472 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.731 00:15:18.989 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.989 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.989 18:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.248 { 00:15:19.248 "cntlid": 69, 00:15:19.248 "qid": 0, 00:15:19.248 "state": "enabled", 00:15:19.248 "thread": "nvmf_tgt_poll_group_000", 00:15:19.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:19.248 
"listen_address": { 00:15:19.248 "trtype": "TCP", 00:15:19.248 "adrfam": "IPv4", 00:15:19.248 "traddr": "10.0.0.2", 00:15:19.248 "trsvcid": "4420" 00:15:19.248 }, 00:15:19.248 "peer_address": { 00:15:19.248 "trtype": "TCP", 00:15:19.248 "adrfam": "IPv4", 00:15:19.248 "traddr": "10.0.0.1", 00:15:19.248 "trsvcid": "36212" 00:15:19.248 }, 00:15:19.248 "auth": { 00:15:19.248 "state": "completed", 00:15:19.248 "digest": "sha384", 00:15:19.248 "dhgroup": "ffdhe3072" 00:15:19.248 } 00:15:19.248 } 00:15:19.248 ]' 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.248 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.507 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:19.507 18:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.441 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.699 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.957 00:15:20.957 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.957 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.957 18:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.523 { 00:15:21.523 "cntlid": 71, 00:15:21.523 "qid": 0, 00:15:21.523 "state": "enabled", 00:15:21.523 "thread": "nvmf_tgt_poll_group_000", 00:15:21.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:21.523 "listen_address": { 00:15:21.523 "trtype": "TCP", 00:15:21.523 "adrfam": "IPv4", 00:15:21.523 "traddr": "10.0.0.2", 00:15:21.523 "trsvcid": "4420" 00:15:21.523 }, 00:15:21.523 "peer_address": { 00:15:21.523 "trtype": "TCP", 00:15:21.523 "adrfam": "IPv4", 00:15:21.523 "traddr": "10.0.0.1", 00:15:21.523 "trsvcid": "47036" 00:15:21.523 }, 00:15:21.523 "auth": { 00:15:21.523 "state": "completed", 00:15:21.523 "digest": "sha384", 00:15:21.523 "dhgroup": "ffdhe3072" 00:15:21.523 } 00:15:21.523 } 00:15:21.523 ]' 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.523 18:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.523 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.781 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:21.781 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.714 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.972 18:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.230 00:15:23.230 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.230 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.230 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.488 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.488 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.488 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.488 18:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.488 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.488 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.488 { 00:15:23.488 "cntlid": 73, 00:15:23.488 "qid": 0, 00:15:23.488 "state": "enabled", 00:15:23.488 "thread": "nvmf_tgt_poll_group_000", 00:15:23.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:23.488 "listen_address": { 00:15:23.488 "trtype": "TCP", 00:15:23.488 "adrfam": "IPv4", 00:15:23.488 "traddr": "10.0.0.2", 00:15:23.488 "trsvcid": "4420" 00:15:23.488 }, 00:15:23.488 "peer_address": { 00:15:23.488 "trtype": "TCP", 00:15:23.488 "adrfam": "IPv4", 00:15:23.488 "traddr": "10.0.0.1", 00:15:23.488 "trsvcid": "47074" 00:15:23.488 }, 00:15:23.488 "auth": { 00:15:23.488 "state": "completed", 00:15:23.488 "digest": "sha384", 00:15:23.488 "dhgroup": "ffdhe4096" 00:15:23.488 } 00:15:23.488 } 00:15:23.488 ]' 00:15:23.488 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.746 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.746 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.746 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.746 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.746 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.746 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.746 18:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.004 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:24.004 18:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.939 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.197 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.764 00:15:25.764 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.764 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.764 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.022 { 00:15:26.022 "cntlid": 75, 00:15:26.022 "qid": 0, 00:15:26.022 "state": "enabled", 00:15:26.022 "thread": "nvmf_tgt_poll_group_000", 00:15:26.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:26.022 
"listen_address": { 00:15:26.022 "trtype": "TCP", 00:15:26.022 "adrfam": "IPv4", 00:15:26.022 "traddr": "10.0.0.2", 00:15:26.022 "trsvcid": "4420" 00:15:26.022 }, 00:15:26.022 "peer_address": { 00:15:26.022 "trtype": "TCP", 00:15:26.022 "adrfam": "IPv4", 00:15:26.022 "traddr": "10.0.0.1", 00:15:26.022 "trsvcid": "47116" 00:15:26.022 }, 00:15:26.022 "auth": { 00:15:26.022 "state": "completed", 00:15:26.022 "digest": "sha384", 00:15:26.022 "dhgroup": "ffdhe4096" 00:15:26.022 } 00:15:26.022 } 00:15:26.022 ]' 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.022 18:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.280 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:26.280 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.215 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.473 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.039 00:15:28.039 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:28.039 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.039 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.297 { 00:15:28.297 "cntlid": 77, 00:15:28.297 "qid": 0, 00:15:28.297 "state": "enabled", 00:15:28.297 "thread": "nvmf_tgt_poll_group_000", 00:15:28.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:28.297 "listen_address": { 00:15:28.297 "trtype": "TCP", 00:15:28.297 "adrfam": "IPv4", 00:15:28.297 "traddr": "10.0.0.2", 00:15:28.297 "trsvcid": "4420" 00:15:28.297 }, 00:15:28.297 "peer_address": { 00:15:28.297 "trtype": "TCP", 00:15:28.297 "adrfam": "IPv4", 00:15:28.297 "traddr": "10.0.0.1", 00:15:28.297 "trsvcid": "47154" 00:15:28.297 }, 00:15:28.297 "auth": { 00:15:28.297 "state": "completed", 00:15:28.297 "digest": "sha384", 00:15:28.297 "dhgroup": "ffdhe4096" 00:15:28.297 } 00:15:28.297 } 00:15:28.297 ]' 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.297 18:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.297 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.556 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:28.556 18:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:29.489 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.490 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.747 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:29.748 18:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.748 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.314 00:15:30.314 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.314 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.314 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.314 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.572 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.572 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.572 18:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.572 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.572 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.572 { 00:15:30.572 "cntlid": 79, 00:15:30.572 "qid": 0, 00:15:30.573 "state": "enabled", 00:15:30.573 "thread": "nvmf_tgt_poll_group_000", 00:15:30.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:30.573 "listen_address": { 00:15:30.573 "trtype": "TCP", 00:15:30.573 "adrfam": "IPv4", 00:15:30.573 "traddr": "10.0.0.2", 00:15:30.573 "trsvcid": "4420" 00:15:30.573 }, 00:15:30.573 "peer_address": { 00:15:30.573 "trtype": "TCP", 00:15:30.573 "adrfam": "IPv4", 00:15:30.573 "traddr": "10.0.0.1", 00:15:30.573 "trsvcid": "59796" 00:15:30.573 }, 00:15:30.573 "auth": { 00:15:30.573 "state": "completed", 00:15:30.573 "digest": "sha384", 00:15:30.573 "dhgroup": "ffdhe4096" 00:15:30.573 } 00:15:30.573 } 00:15:30.573 ]' 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.573 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.573 18:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.837 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:30.837 18:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:31.855 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.114 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.684 00:15:32.684 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.684 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.684 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.941 { 00:15:32.941 "cntlid": 81, 00:15:32.941 "qid": 0, 00:15:32.941 "state": "enabled", 00:15:32.941 "thread": "nvmf_tgt_poll_group_000", 00:15:32.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:32.941 "listen_address": { 
00:15:32.941 "trtype": "TCP", 00:15:32.941 "adrfam": "IPv4", 00:15:32.941 "traddr": "10.0.0.2", 00:15:32.941 "trsvcid": "4420" 00:15:32.941 }, 00:15:32.941 "peer_address": { 00:15:32.941 "trtype": "TCP", 00:15:32.941 "adrfam": "IPv4", 00:15:32.941 "traddr": "10.0.0.1", 00:15:32.941 "trsvcid": "59818" 00:15:32.941 }, 00:15:32.941 "auth": { 00:15:32.941 "state": "completed", 00:15:32.941 "digest": "sha384", 00:15:32.941 "dhgroup": "ffdhe6144" 00:15:32.941 } 00:15:32.941 } 00:15:32.941 ]' 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.941 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.510 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:33.510 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.446 18:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.013 00:15:35.013 18:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.013 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.013 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.271 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.529 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.529 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.529 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.529 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.529 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.529 { 00:15:35.529 "cntlid": 83, 00:15:35.529 "qid": 0, 00:15:35.529 "state": "enabled", 00:15:35.529 "thread": "nvmf_tgt_poll_group_000", 00:15:35.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:35.529 "listen_address": { 00:15:35.529 "trtype": "TCP", 00:15:35.529 "adrfam": "IPv4", 00:15:35.529 "traddr": "10.0.0.2", 00:15:35.529 "trsvcid": "4420" 00:15:35.529 }, 00:15:35.529 "peer_address": { 00:15:35.529 "trtype": "TCP", 00:15:35.529 "adrfam": "IPv4", 00:15:35.529 "traddr": "10.0.0.1", 00:15:35.529 "trsvcid": "59848" 00:15:35.529 }, 00:15:35.529 "auth": { 00:15:35.529 "state": "completed", 00:15:35.529 "digest": "sha384", 00:15:35.529 "dhgroup": "ffdhe6144" 00:15:35.529 } 00:15:35.529 } 00:15:35.530 ]' 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.530 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.787 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:35.788 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.728 18:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.728 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.987 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.554 00:15:37.554 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.554 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.554 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.812 { 00:15:37.812 "cntlid": 85, 00:15:37.812 "qid": 0, 00:15:37.812 "state": "enabled", 00:15:37.812 "thread": "nvmf_tgt_poll_group_000", 00:15:37.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:37.812 "listen_address": { 00:15:37.812 "trtype": "TCP", 00:15:37.812 "adrfam": "IPv4", 00:15:37.812 "traddr": "10.0.0.2", 00:15:37.812 "trsvcid": "4420" 00:15:37.812 }, 00:15:37.812 "peer_address": { 00:15:37.812 "trtype": "TCP", 00:15:37.812 "adrfam": "IPv4", 00:15:37.812 "traddr": "10.0.0.1", 00:15:37.812 "trsvcid": "59860" 00:15:37.812 }, 00:15:37.812 "auth": { 00:15:37.812 "state": "completed", 00:15:37.812 "digest": "sha384", 00:15:37.812 "dhgroup": "ffdhe6144" 00:15:37.812 } 00:15:37.812 } 00:15:37.812 ]' 00:15:37.812 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.070 18:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.328 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:38.328 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.263 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.521 18:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.089 00:15:40.089 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.089 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.089 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.654 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.654 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.654 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.654 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.654 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.654 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.654 { 00:15:40.654 "cntlid": 87, 00:15:40.654 "qid": 0, 00:15:40.654 "state": "enabled", 00:15:40.654 "thread": "nvmf_tgt_poll_group_000", 00:15:40.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:40.654 "listen_address": { 00:15:40.654 "trtype": 
"TCP", 00:15:40.654 "adrfam": "IPv4", 00:15:40.655 "traddr": "10.0.0.2", 00:15:40.655 "trsvcid": "4420" 00:15:40.655 }, 00:15:40.655 "peer_address": { 00:15:40.655 "trtype": "TCP", 00:15:40.655 "adrfam": "IPv4", 00:15:40.655 "traddr": "10.0.0.1", 00:15:40.655 "trsvcid": "36430" 00:15:40.655 }, 00:15:40.655 "auth": { 00:15:40.655 "state": "completed", 00:15:40.655 "digest": "sha384", 00:15:40.655 "dhgroup": "ffdhe6144" 00:15:40.655 } 00:15:40.655 } 00:15:40.655 ]' 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.655 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.913 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:40.913 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.848 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.106 18:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.106 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.040 00:15:43.040 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.040 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.040 18:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.297 { 00:15:43.297 "cntlid": 89, 00:15:43.297 "qid": 0, 00:15:43.297 "state": "enabled", 00:15:43.297 "thread": "nvmf_tgt_poll_group_000", 00:15:43.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:43.297 "listen_address": { 00:15:43.297 "trtype": "TCP", 00:15:43.297 "adrfam": "IPv4", 00:15:43.297 "traddr": "10.0.0.2", 00:15:43.297 "trsvcid": "4420" 00:15:43.297 }, 00:15:43.297 "peer_address": { 00:15:43.297 "trtype": "TCP", 00:15:43.297 "adrfam": "IPv4", 00:15:43.297 "traddr": "10.0.0.1", 00:15:43.297 "trsvcid": "36462" 00:15:43.297 }, 00:15:43.297 "auth": { 00:15:43.297 "state": "completed", 00:15:43.297 "digest": "sha384", 00:15:43.297 "dhgroup": "ffdhe8192" 00:15:43.297 } 00:15:43.297 } 00:15:43.297 ]' 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.297 18:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.297 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.554 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.554 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.554 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.812 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:43.812 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.744 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.002 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.934 00:15:45.934 18:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.934 18:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.934 18:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.191 { 00:15:46.191 "cntlid": 91, 00:15:46.191 "qid": 0, 00:15:46.191 "state": "enabled", 00:15:46.191 "thread": "nvmf_tgt_poll_group_000", 00:15:46.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:46.191 "listen_address": { 00:15:46.191 "trtype": "TCP", 00:15:46.191 "adrfam": "IPv4", 00:15:46.191 "traddr": "10.0.0.2", 00:15:46.191 "trsvcid": "4420" 00:15:46.191 }, 00:15:46.191 "peer_address": { 00:15:46.191 "trtype": "TCP", 00:15:46.191 "adrfam": "IPv4", 00:15:46.191 "traddr": "10.0.0.1", 00:15:46.191 "trsvcid": "36482" 00:15:46.191 }, 00:15:46.191 "auth": { 00:15:46.191 "state": "completed", 00:15:46.191 "digest": "sha384", 00:15:46.191 "dhgroup": "ffdhe8192" 00:15:46.191 } 00:15:46.191 } 00:15:46.191 ]' 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.191 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.757 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:46.757 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.689 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.947 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.879 00:15:48.879 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.879 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.879 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.137 { 00:15:49.137 "cntlid": 93, 00:15:49.137 "qid": 0, 00:15:49.137 "state": "enabled", 00:15:49.137 "thread": "nvmf_tgt_poll_group_000", 00:15:49.137 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:49.137 "listen_address": { 00:15:49.137 "trtype": "TCP", 00:15:49.137 "adrfam": "IPv4", 00:15:49.137 "traddr": "10.0.0.2", 00:15:49.137 "trsvcid": "4420" 00:15:49.137 }, 00:15:49.137 "peer_address": { 00:15:49.137 "trtype": "TCP", 00:15:49.137 "adrfam": "IPv4", 00:15:49.137 "traddr": "10.0.0.1", 00:15:49.137 "trsvcid": "36512" 00:15:49.137 }, 00:15:49.137 "auth": { 00:15:49.137 "state": "completed", 00:15:49.137 "digest": "sha384", 00:15:49.137 "dhgroup": "ffdhe8192" 00:15:49.137 } 00:15:49.137 } 00:15:49.137 ]' 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.137 18:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.137 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.137 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.137 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.137 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.137 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.394 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:49.394 18:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:50.326 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:50.583 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:50.583 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.584 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.516 00:15:51.516 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:51.516 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.516 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.774 { 00:15:51.774 "cntlid": 95, 00:15:51.774 "qid": 0, 00:15:51.774 "state": "enabled", 00:15:51.774 "thread": "nvmf_tgt_poll_group_000", 00:15:51.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:51.774 "listen_address": { 00:15:51.774 "trtype": "TCP", 00:15:51.774 "adrfam": "IPv4", 00:15:51.774 "traddr": "10.0.0.2", 00:15:51.774 "trsvcid": "4420" 00:15:51.774 }, 00:15:51.774 "peer_address": { 00:15:51.774 "trtype": "TCP", 00:15:51.774 "adrfam": "IPv4", 00:15:51.774 "traddr": "10.0.0.1", 00:15:51.774 "trsvcid": "39026" 00:15:51.774 }, 00:15:51.774 "auth": { 00:15:51.774 "state": "completed", 00:15:51.774 "digest": "sha384", 00:15:51.774 "dhgroup": "ffdhe8192" 00:15:51.774 } 00:15:51.774 } 00:15:51.774 ]' 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.774 18:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.774 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.031 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.031 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.031 18:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.288 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:52.288 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:15:53.220 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.220 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.478 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.735 00:15:53.735 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.735 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.735 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.992 18:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.992 { 00:15:53.992 "cntlid": 97, 00:15:53.992 "qid": 0, 00:15:53.992 "state": "enabled", 00:15:53.992 "thread": "nvmf_tgt_poll_group_000", 00:15:53.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:53.992 "listen_address": { 00:15:53.992 "trtype": "TCP", 00:15:53.992 "adrfam": "IPv4", 00:15:53.992 "traddr": "10.0.0.2", 00:15:53.992 "trsvcid": "4420" 00:15:53.992 }, 00:15:53.992 "peer_address": { 00:15:53.992 "trtype": "TCP", 00:15:53.992 "adrfam": "IPv4", 00:15:53.992 "traddr": "10.0.0.1", 00:15:53.992 "trsvcid": "39062" 00:15:53.992 }, 00:15:53.992 "auth": { 00:15:53.992 "state": "completed", 00:15:53.992 "digest": "sha512", 00:15:53.992 "dhgroup": "null" 00:15:53.992 } 00:15:53.992 } 00:15:53.992 ]' 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.992 18:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.992 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.992 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.992 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.572 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:54.572 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.143 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.708 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.966 00:15:55.966 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.966 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.966 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.223 { 00:15:56.223 "cntlid": 99, 
00:15:56.223 "qid": 0, 00:15:56.223 "state": "enabled", 00:15:56.223 "thread": "nvmf_tgt_poll_group_000", 00:15:56.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:56.223 "listen_address": { 00:15:56.223 "trtype": "TCP", 00:15:56.223 "adrfam": "IPv4", 00:15:56.223 "traddr": "10.0.0.2", 00:15:56.223 "trsvcid": "4420" 00:15:56.223 }, 00:15:56.223 "peer_address": { 00:15:56.223 "trtype": "TCP", 00:15:56.223 "adrfam": "IPv4", 00:15:56.223 "traddr": "10.0.0.1", 00:15:56.223 "trsvcid": "39096" 00:15:56.223 }, 00:15:56.223 "auth": { 00:15:56.223 "state": "completed", 00:15:56.223 "digest": "sha512", 00:15:56.223 "dhgroup": "null" 00:15:56.223 } 00:15:56.223 } 00:15:56.223 ]' 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.223 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.224 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.481 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.481 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.481 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.738 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret 
DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:56.738 18:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.671 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.928 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.929 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.186 00:15:58.186 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.186 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.186 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.751 { 00:15:58.751 "cntlid": 101, 00:15:58.751 "qid": 0, 00:15:58.751 "state": "enabled", 00:15:58.751 "thread": "nvmf_tgt_poll_group_000", 00:15:58.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:58.751 "listen_address": { 00:15:58.751 "trtype": "TCP", 00:15:58.751 "adrfam": "IPv4", 00:15:58.751 "traddr": "10.0.0.2", 00:15:58.751 "trsvcid": "4420" 00:15:58.751 }, 00:15:58.751 "peer_address": { 00:15:58.751 "trtype": "TCP", 00:15:58.751 "adrfam": "IPv4", 00:15:58.751 "traddr": "10.0.0.1", 00:15:58.751 "trsvcid": "39112" 00:15:58.751 }, 00:15:58.751 "auth": { 00:15:58.751 "state": "completed", 00:15:58.751 "digest": "sha512", 00:15:58.751 "dhgroup": "null" 00:15:58.751 } 00:15:58.751 } 
00:15:58.751 ]' 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.751 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.008 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:59.008 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.940 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:59.940 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.197 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.455 00:16:00.455 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.455 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.455 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.713 { 00:16:00.713 "cntlid": 103, 00:16:00.713 "qid": 0, 00:16:00.713 "state": "enabled", 00:16:00.713 "thread": "nvmf_tgt_poll_group_000", 00:16:00.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:00.713 "listen_address": { 00:16:00.713 "trtype": "TCP", 00:16:00.713 "adrfam": "IPv4", 00:16:00.713 "traddr": "10.0.0.2", 00:16:00.713 "trsvcid": "4420" 00:16:00.713 }, 00:16:00.713 "peer_address": { 00:16:00.713 "trtype": "TCP", 00:16:00.713 "adrfam": "IPv4", 00:16:00.713 "traddr": "10.0.0.1", 00:16:00.713 "trsvcid": "59650" 00:16:00.713 }, 00:16:00.713 "auth": { 00:16:00.713 "state": "completed", 00:16:00.713 "digest": "sha512", 00:16:00.713 "dhgroup": "null" 00:16:00.713 } 00:16:00.713 } 00:16:00.713 ]' 00:16:00.713 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.971 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.971 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.971 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.971 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.971 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.971 18:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.971 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.229 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:01.229 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:02.160 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.160 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.160 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.160 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.160 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.160 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.161 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.161 18:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.161 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.417 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.674 00:16:02.674 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.674 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.674 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.239 { 00:16:03.239 "cntlid": 105, 00:16:03.239 "qid": 0, 00:16:03.239 "state": "enabled", 00:16:03.239 "thread": "nvmf_tgt_poll_group_000", 00:16:03.239 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.239 "listen_address": { 00:16:03.239 "trtype": "TCP", 00:16:03.239 "adrfam": "IPv4", 00:16:03.239 "traddr": "10.0.0.2", 00:16:03.239 "trsvcid": "4420" 00:16:03.239 }, 00:16:03.239 "peer_address": { 00:16:03.239 "trtype": "TCP", 00:16:03.239 "adrfam": "IPv4", 00:16:03.239 "traddr": "10.0.0.1", 00:16:03.239 "trsvcid": "59690" 00:16:03.239 }, 00:16:03.239 "auth": { 00:16:03.239 "state": "completed", 00:16:03.239 "digest": "sha512", 00:16:03.239 "dhgroup": "ffdhe2048" 00:16:03.239 } 00:16:03.239 } 00:16:03.239 ]' 00:16:03.239 18:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.239 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.496 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret 
DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:03.496 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.428 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.686 18:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.686 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.943 00:16:04.943 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.944 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.944 18:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.202 { 00:16:05.202 "cntlid": 107, 00:16:05.202 "qid": 0, 00:16:05.202 "state": "enabled", 00:16:05.202 "thread": "nvmf_tgt_poll_group_000", 00:16:05.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:05.202 "listen_address": { 00:16:05.202 "trtype": "TCP", 00:16:05.202 "adrfam": "IPv4", 00:16:05.202 "traddr": "10.0.0.2", 00:16:05.202 "trsvcid": "4420" 00:16:05.202 }, 00:16:05.202 "peer_address": { 00:16:05.202 "trtype": "TCP", 00:16:05.202 "adrfam": "IPv4", 00:16:05.202 "traddr": "10.0.0.1", 00:16:05.202 "trsvcid": "59722" 00:16:05.202 }, 00:16:05.202 "auth": { 00:16:05.202 "state": 
"completed", 00:16:05.202 "digest": "sha512", 00:16:05.202 "dhgroup": "ffdhe2048" 00:16:05.202 } 00:16:05.202 } 00:16:05.202 ]' 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.202 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.459 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.459 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.459 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.717 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:05.717 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:06.649 18:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.649 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.906 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:06.906 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.907 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.164 00:16:07.164 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.164 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.164 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.421 
18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.421 { 00:16:07.421 "cntlid": 109, 00:16:07.421 "qid": 0, 00:16:07.421 "state": "enabled", 00:16:07.421 "thread": "nvmf_tgt_poll_group_000", 00:16:07.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:07.421 "listen_address": { 00:16:07.421 "trtype": "TCP", 00:16:07.421 "adrfam": "IPv4", 00:16:07.421 "traddr": "10.0.0.2", 00:16:07.421 "trsvcid": "4420" 00:16:07.421 }, 00:16:07.421 "peer_address": { 00:16:07.421 "trtype": "TCP", 00:16:07.421 "adrfam": "IPv4", 00:16:07.421 "traddr": "10.0.0.1", 00:16:07.421 "trsvcid": "59754" 00:16:07.421 }, 00:16:07.421 "auth": { 00:16:07.421 "state": "completed", 00:16:07.421 "digest": "sha512", 00:16:07.421 "dhgroup": "ffdhe2048" 00:16:07.421 } 00:16:07.421 } 00:16:07.421 ]' 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.421 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.421 18:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.678 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.678 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.678 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.936 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:07.936 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:08.867 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.867 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.867 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 
18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.867 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.868 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.124 18:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.124 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.384 00:16:09.384 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.384 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.384 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.949 { 00:16:09.949 "cntlid": 111, 
00:16:09.949 "qid": 0, 00:16:09.949 "state": "enabled", 00:16:09.949 "thread": "nvmf_tgt_poll_group_000", 00:16:09.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:09.949 "listen_address": { 00:16:09.949 "trtype": "TCP", 00:16:09.949 "adrfam": "IPv4", 00:16:09.949 "traddr": "10.0.0.2", 00:16:09.949 "trsvcid": "4420" 00:16:09.949 }, 00:16:09.949 "peer_address": { 00:16:09.949 "trtype": "TCP", 00:16:09.949 "adrfam": "IPv4", 00:16:09.949 "traddr": "10.0.0.1", 00:16:09.949 "trsvcid": "49354" 00:16:09.949 }, 00:16:09.949 "auth": { 00:16:09.949 "state": "completed", 00:16:09.949 "digest": "sha512", 00:16:09.949 "dhgroup": "ffdhe2048" 00:16:09.949 } 00:16:09.949 } 00:16:09.949 ]' 00:16:09.949 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.950 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.207 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:
00:16:10.207 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.139 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:11.139 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.396 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.397 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.397 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.397 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.397 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.654
00:16:11.654 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:11.654 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:11.654 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:11.911 {
00:16:11.911 "cntlid": 113,
00:16:11.911 "qid": 0,
00:16:11.911 "state": "enabled",
00:16:11.911 "thread": "nvmf_tgt_poll_group_000",
00:16:11.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:11.911 "listen_address": {
00:16:11.911 "trtype": "TCP",
00:16:11.911 "adrfam": "IPv4",
00:16:11.911 "traddr": "10.0.0.2",
00:16:11.911 "trsvcid": "4420"
00:16:11.911 },
00:16:11.911 "peer_address": {
00:16:11.911 "trtype": "TCP",
00:16:11.911 "adrfam": "IPv4",
00:16:11.911 "traddr": "10.0.0.1",
00:16:11.911 "trsvcid": "49380"
00:16:11.911 },
00:16:11.911 "auth": {
00:16:11.911 "state": "completed",
00:16:11.911 "digest": "sha512",
00:16:11.911 "dhgroup": "ffdhe3072"
00:16:11.911 }
00:16:11.911 }
00:16:11.911 ]'
00:16:11.911 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.169 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:12.169 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.169 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:12.169 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.169 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.169 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.169 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.426 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:16:12.426 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:13.356 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:13.612 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.613 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.613 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.613 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.613 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.613 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.613 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.870
00:16:13.870 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:13.870 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:13.870 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:14.128 {
00:16:14.128 "cntlid": 115,
00:16:14.128 "qid": 0,
00:16:14.128 "state": "enabled",
00:16:14.128 "thread": "nvmf_tgt_poll_group_000",
00:16:14.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:14.128 "listen_address": {
00:16:14.128 "trtype": "TCP",
00:16:14.128 "adrfam": "IPv4",
00:16:14.128 "traddr": "10.0.0.2",
00:16:14.128 "trsvcid": "4420"
00:16:14.128 },
00:16:14.128 "peer_address": {
00:16:14.128 "trtype": "TCP",
00:16:14.128 "adrfam": "IPv4",
00:16:14.128 "traddr": "10.0.0.1",
00:16:14.128 "trsvcid": "49404"
00:16:14.128 },
00:16:14.128 "auth": {
00:16:14.128 "state": "completed",
00:16:14.128 "digest": "sha512",
00:16:14.128 "dhgroup": "ffdhe3072"
00:16:14.128 }
00:16:14.128 }
00:16:14.128 ]'
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:14.128 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:14.386 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:14.386 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:14.386 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.386 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.386 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:14.644 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==:
00:16:14.644 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==:
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:15.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:15.576 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.833 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.091
00:16:16.091 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:16.091 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:16.091 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.348 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.348 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.348 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.348 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.348 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.348 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:16.348 {
00:16:16.348 "cntlid": 117,
00:16:16.348 "qid": 0,
00:16:16.348 "state": "enabled",
00:16:16.348 "thread": "nvmf_tgt_poll_group_000",
00:16:16.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:16.348 "listen_address": {
00:16:16.348 "trtype": "TCP",
00:16:16.348 "adrfam": "IPv4",
00:16:16.348 "traddr": "10.0.0.2",
00:16:16.348 "trsvcid": "4420"
00:16:16.348 },
00:16:16.348 "peer_address": {
00:16:16.348 "trtype": "TCP",
00:16:16.348 "adrfam": "IPv4",
00:16:16.348 "traddr": "10.0.0.1",
00:16:16.349 "trsvcid": "49430"
00:16:16.349 },
00:16:16.349 "auth": {
00:16:16.349 "state": "completed",
00:16:16.349 "digest": "sha512",
00:16:16.349 "dhgroup": "ffdhe3072"
00:16:16.349 }
00:16:16.349 }
00:16:16.349 ]'
00:16:16.349 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:16.349 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:16.349 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:16.606 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:16.606 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:16.606 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:16.606 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:16.606 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.864 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu:
00:16:16.864 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu:
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:17.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:17.799 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:18.057 18:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:18.314
00:16:18.314 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:18.314 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:18.314 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:18.572 {
00:16:18.572 "cntlid": 119,
00:16:18.572 "qid": 0,
00:16:18.572 "state": "enabled",
00:16:18.572 "thread": "nvmf_tgt_poll_group_000",
00:16:18.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:18.572 "listen_address": {
00:16:18.572 "trtype": "TCP",
00:16:18.572 "adrfam": "IPv4",
00:16:18.572 "traddr": "10.0.0.2",
00:16:18.572 "trsvcid": "4420"
00:16:18.572 },
00:16:18.572 "peer_address": {
00:16:18.572 "trtype": "TCP",
00:16:18.572 "adrfam": "IPv4",
00:16:18.572 "traddr": "10.0.0.1",
00:16:18.572 "trsvcid": "49458"
00:16:18.572 },
00:16:18.572 "auth": {
00:16:18.572 "state": "completed",
00:16:18.572 "digest": "sha512",
00:16:18.572 "dhgroup": "ffdhe3072"
00:16:18.572 }
00:16:18.572 }
00:16:18.572 ]'
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:18.572 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:18.830 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:18.830 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:18.830 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:18.830 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:18.830 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.087 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:
00:16:19.087 18:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:20.112 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.370 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.628
00:16:20.628 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:20.628 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:20.628 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:20.886 {
00:16:20.886 "cntlid": 121,
00:16:20.886 "qid": 0,
00:16:20.886 "state": "enabled",
00:16:20.886 "thread": "nvmf_tgt_poll_group_000",
00:16:20.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:20.886 "listen_address": {
00:16:20.886 "trtype": "TCP",
00:16:20.886 "adrfam": "IPv4",
00:16:20.886 "traddr": "10.0.0.2",
00:16:20.886 "trsvcid": "4420"
00:16:20.886 },
00:16:20.886 "peer_address": {
00:16:20.886 "trtype": "TCP",
00:16:20.886 "adrfam": "IPv4",
00:16:20.886 "traddr": "10.0.0.1",
00:16:20.886 "trsvcid": "34588"
00:16:20.886 },
00:16:20.886 "auth": {
00:16:20.886 "state": "completed",
00:16:20.886 "digest": "sha512",
00:16:20.886 "dhgroup": "ffdhe4096"
00:16:20.886 }
00:16:20.886 }
00:16:20.886 ]'
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:20.886 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:21.144 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:21.144 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:21.144 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.144 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.144 18:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:21.402 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:16:21.402 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=:
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:22.339 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.597 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.855
00:16:22.855 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:22.855 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:22.855 18:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x 00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.113 { 00:16:23.113 "cntlid": 123, 00:16:23.113 "qid": 0, 00:16:23.113 "state": "enabled", 00:16:23.113 "thread": "nvmf_tgt_poll_group_000", 00:16:23.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:23.113 "listen_address": { 00:16:23.113 "trtype": "TCP", 00:16:23.113 "adrfam": "IPv4", 00:16:23.113 "traddr": "10.0.0.2", 00:16:23.113 "trsvcid": "4420" 00:16:23.113 }, 00:16:23.113 "peer_address": { 00:16:23.113 "trtype": "TCP", 00:16:23.113 "adrfam": "IPv4", 00:16:23.113 "traddr": "10.0.0.1", 00:16:23.113 "trsvcid": "34620" 00:16:23.113 }, 00:16:23.113 "auth": { 00:16:23.113 "state": "completed", 00:16:23.113 "digest": "sha512", 00:16:23.113 "dhgroup": "ffdhe4096" 00:16:23.113 } 00:16:23.113 } 00:16:23.113 ]' 00:16:23.113 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.371 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.629 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:23.629 18:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.567 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.567 18:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.825 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.083 00:16:25.083 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.083 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.083 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.341 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.341 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.341 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.341 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.600 { 00:16:25.600 "cntlid": 125, 00:16:25.600 "qid": 0, 00:16:25.600 "state": "enabled", 00:16:25.600 "thread": "nvmf_tgt_poll_group_000", 00:16:25.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:25.600 "listen_address": { 00:16:25.600 "trtype": "TCP", 00:16:25.600 "adrfam": "IPv4", 00:16:25.600 "traddr": "10.0.0.2", 00:16:25.600 
"trsvcid": "4420" 00:16:25.600 }, 00:16:25.600 "peer_address": { 00:16:25.600 "trtype": "TCP", 00:16:25.600 "adrfam": "IPv4", 00:16:25.600 "traddr": "10.0.0.1", 00:16:25.600 "trsvcid": "34658" 00:16:25.600 }, 00:16:25.600 "auth": { 00:16:25.600 "state": "completed", 00:16:25.600 "digest": "sha512", 00:16:25.600 "dhgroup": "ffdhe4096" 00:16:25.600 } 00:16:25.600 } 00:16:25.600 ]' 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.600 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.859 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:25.859 18:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.795 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.053 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.618 00:16:27.618 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.618 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.618 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.876 { 00:16:27.876 "cntlid": 127, 00:16:27.876 "qid": 0, 00:16:27.876 "state": "enabled", 00:16:27.876 "thread": "nvmf_tgt_poll_group_000", 00:16:27.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:27.876 "listen_address": { 00:16:27.876 "trtype": "TCP", 00:16:27.876 "adrfam": "IPv4", 00:16:27.876 "traddr": "10.0.0.2", 00:16:27.876 "trsvcid": "4420" 00:16:27.876 }, 00:16:27.876 "peer_address": { 00:16:27.876 "trtype": "TCP", 00:16:27.876 "adrfam": "IPv4", 00:16:27.876 "traddr": "10.0.0.1", 00:16:27.876 "trsvcid": "34674" 00:16:27.876 }, 00:16:27.876 "auth": { 00:16:27.876 "state": "completed", 00:16:27.876 "digest": "sha512", 00:16:27.876 "dhgroup": "ffdhe4096" 00:16:27.876 } 00:16:27.876 } 00:16:27.876 ]' 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.876 18:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.876 18:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.133 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:28.133 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:29.066 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.324 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.889 00:16:29.889 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.889 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.889 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.146 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.146 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.146 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.146 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.146 18:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.146 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.146 { 00:16:30.146 "cntlid": 129, 00:16:30.146 "qid": 0, 00:16:30.146 "state": "enabled", 00:16:30.146 "thread": "nvmf_tgt_poll_group_000", 00:16:30.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:30.146 "listen_address": { 00:16:30.146 "trtype": "TCP", 00:16:30.146 "adrfam": "IPv4", 00:16:30.146 "traddr": "10.0.0.2", 00:16:30.146 "trsvcid": "4420" 00:16:30.146 }, 00:16:30.146 "peer_address": { 00:16:30.146 "trtype": "TCP", 00:16:30.146 "adrfam": "IPv4", 00:16:30.146 "traddr": "10.0.0.1", 00:16:30.146 "trsvcid": "38686" 00:16:30.146 }, 00:16:30.146 "auth": { 00:16:30.146 "state": "completed", 00:16:30.146 "digest": "sha512", 00:16:30.146 "dhgroup": "ffdhe6144" 00:16:30.146 } 00:16:30.146 } 00:16:30.146 ]' 00:16:30.146 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.404 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.661 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:30.661 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.594 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.594 18:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.851 18:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.416 00:16:32.416 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.416 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.416 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.673 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.673 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.673 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.673 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.673 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.673 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.673 { 00:16:32.673 "cntlid": 131, 00:16:32.674 "qid": 0, 00:16:32.674 "state": "enabled", 00:16:32.674 "thread": "nvmf_tgt_poll_group_000", 00:16:32.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.674 "listen_address": { 00:16:32.674 "trtype": "TCP", 00:16:32.674 "adrfam": "IPv4", 00:16:32.674 "traddr": "10.0.0.2", 00:16:32.674 
"trsvcid": "4420" 00:16:32.674 }, 00:16:32.674 "peer_address": { 00:16:32.674 "trtype": "TCP", 00:16:32.674 "adrfam": "IPv4", 00:16:32.674 "traddr": "10.0.0.1", 00:16:32.674 "trsvcid": "38716" 00:16:32.674 }, 00:16:32.674 "auth": { 00:16:32.674 "state": "completed", 00:16:32.674 "digest": "sha512", 00:16:32.674 "dhgroup": "ffdhe6144" 00:16:32.674 } 00:16:32.674 } 00:16:32.674 ]' 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.674 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.931 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:32.931 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:33.863 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.864 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
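The trace above repeats one cycle per key: restrict the host's allowed digest/dhgroup via `bdev_nvme_set_options`, add the host NQN to the subsystem with the matching DH-HMAC-CHAP key pair, then attach a controller, which performs the authentication during connect. A minimal dry-run sketch of that cycle, echoing the `rpc.py` calls instead of invoking a live target (socket path, addresses, and NQNs are copied from this log; the wrapper function name is hypothetical):

```shell
#!/bin/sh
# Dry-run sketch of one connect_authenticate cycle from target/auth.sh.
# "echo" prefixes each rpc.py call so the sketch runs without SPDK.
RPC="echo rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

connect_authenticate() {
    digest=$1 dhgroup=$2 keyid=$3
    # 1. Limit the host-side bdev layer to one digest/dhgroup pair.
    $RPC bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
    # 2. Allow the host on the subsystem with host and controller keys.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 3. Attach a controller; DH-HMAC-CHAP runs as part of the connect.
    $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
}

connect_authenticate sha512 ffdhe6144 1
```

After the checks pass, the cycle is torn down in reverse: detach the controller, reconnect once through the kernel host with `nvme connect`, disconnect, and remove the host from the subsystem.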
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.429 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.994 00:16:34.994 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.994 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:34.994 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.251 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.251 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.251 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.251 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.252 { 00:16:35.252 "cntlid": 133, 00:16:35.252 "qid": 0, 00:16:35.252 "state": "enabled", 00:16:35.252 "thread": "nvmf_tgt_poll_group_000", 00:16:35.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:35.252 "listen_address": { 00:16:35.252 "trtype": "TCP", 00:16:35.252 "adrfam": "IPv4", 00:16:35.252 "traddr": "10.0.0.2", 00:16:35.252 "trsvcid": "4420" 00:16:35.252 }, 00:16:35.252 "peer_address": { 00:16:35.252 "trtype": "TCP", 00:16:35.252 "adrfam": "IPv4", 00:16:35.252 "traddr": "10.0.0.1", 00:16:35.252 "trsvcid": "38742" 00:16:35.252 }, 00:16:35.252 "auth": { 00:16:35.252 "state": "completed", 00:16:35.252 "digest": "sha512", 00:16:35.252 "dhgroup": "ffdhe6144" 00:16:35.252 } 00:16:35.252 } 00:16:35.252 ]' 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.252 18:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.252 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.509 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:35.509 18:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.444 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.704 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.269 00:16:37.269 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.269 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.269 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.526 { 00:16:37.526 "cntlid": 135, 00:16:37.526 "qid": 0, 00:16:37.526 "state": "enabled", 00:16:37.526 "thread": "nvmf_tgt_poll_group_000", 00:16:37.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:37.526 "listen_address": { 00:16:37.526 "trtype": "TCP", 00:16:37.526 "adrfam": "IPv4", 00:16:37.526 "traddr": "10.0.0.2", 00:16:37.526 "trsvcid": "4420" 00:16:37.526 }, 00:16:37.526 "peer_address": { 00:16:37.526 "trtype": "TCP", 00:16:37.526 "adrfam": "IPv4", 00:16:37.526 "traddr": "10.0.0.1", 00:16:37.526 "trsvcid": "38756" 00:16:37.526 }, 00:16:37.526 "auth": { 00:16:37.526 "state": "completed", 00:16:37.526 "digest": "sha512", 00:16:37.526 "dhgroup": "ffdhe6144" 00:16:37.526 } 00:16:37.526 } 00:16:37.526 ]' 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.526 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.784 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.784 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.784 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.784 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.784 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
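After each attach, `auth.sh` dumps `nvmf_subsystem_get_qpairs` and asserts the `auth` block's `digest`, `dhgroup`, and `state` with `jq` (the `@75`–`@77` checks visible above). The same verification can be sketched without `jq`; the JSON below is an abbreviated copy of one qpair record from this log:

```shell
#!/bin/sh
# Abbreviated auth block as printed by nvmf_subsystem_get_qpairs.
qpairs='
  "auth": {
    "state": "completed",
    "digest": "sha512",
    "dhgroup": "ffdhe6144"
  }'

# Pull one string field out of the pretty-printed JSON. One field per
# line makes a sed capture sufficient; jq is the robust tool for this.
auth_field() {
    printf '%s\n' "$qpairs" \
        | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p" | head -n 1
}

# Mirror the [[ ... == ... ]] assertions from auth.sh@75-77.
[ "$(auth_field digest)" = sha512 ]
[ "$(auth_field dhgroup)" = ffdhe6144 ]
[ "$(auth_field state)" = completed ]
```

`"state": "completed"` is the signal that in-band authentication actually ran and succeeded for the queue pair, which is what distinguishes these passes from an unauthenticated connect.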
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.041 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:38.041 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.974 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.974 18:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.231 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.164 00:16:40.164 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.164 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.164 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.164 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.164 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.164 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.164 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.421 { 00:16:40.421 "cntlid": 137, 00:16:40.421 "qid": 0, 00:16:40.421 "state": "enabled", 00:16:40.421 "thread": "nvmf_tgt_poll_group_000", 00:16:40.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:40.421 "listen_address": { 00:16:40.421 "trtype": "TCP", 00:16:40.421 "adrfam": "IPv4", 00:16:40.421 "traddr": "10.0.0.2", 00:16:40.421 
"trsvcid": "4420" 00:16:40.421 }, 00:16:40.421 "peer_address": { 00:16:40.421 "trtype": "TCP", 00:16:40.421 "adrfam": "IPv4", 00:16:40.421 "traddr": "10.0.0.1", 00:16:40.421 "trsvcid": "50518" 00:16:40.421 }, 00:16:40.421 "auth": { 00:16:40.421 "state": "completed", 00:16:40.421 "digest": "sha512", 00:16:40.421 "dhgroup": "ffdhe8192" 00:16:40.421 } 00:16:40.421 } 00:16:40.421 ]' 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.421 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.679 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:40.679 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.612 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.870 18:06:04 
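The secrets passed to `nvme connect` above use the NVMe-oF in-band authentication secret representation `DHHC-1:<t>:<base64>:`, where, as I understand the format from nvme-cli's `gen-dhchap-key`, `<t>` encodes the key transform (`00` for none, `01`/`02`/`03` for SHA-256/384/512). A structural sanity check only (it does not validate the CRC32 that is embedded inside the base64 payload; the helper name is hypothetical):

```shell
#!/bin/sh
# Hypothetical helper: accept a string only if it is shaped like a
# DHHC-1 secret: transform field 00-03, base64 payload, trailing colon.
is_dhchap_key() {
    printf '%s' "$1" | grep -Eq '^DHHC-1:0[0-3]:[A-Za-z0-9+/]+=*:$'
}

# A key3 secret taken verbatim from this log.
is_dhchap_key 'DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=:' \
    && echo valid
```

This shape is why the log's `--dhchap-secret`/`--dhchap-ctrl-secret` values all begin with `DHHC-1:` and end with a colon regardless of key length.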
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.870 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.801 00:16:42.801 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.801 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.801 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.059 { 00:16:43.059 "cntlid": 139, 00:16:43.059 "qid": 0, 00:16:43.059 "state": "enabled", 00:16:43.059 "thread": "nvmf_tgt_poll_group_000", 00:16:43.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:43.059 "listen_address": { 00:16:43.059 "trtype": "TCP", 00:16:43.059 "adrfam": "IPv4", 00:16:43.059 "traddr": "10.0.0.2", 00:16:43.059 "trsvcid": "4420" 00:16:43.059 }, 00:16:43.059 "peer_address": { 00:16:43.059 "trtype": "TCP", 00:16:43.059 "adrfam": "IPv4", 00:16:43.059 "traddr": "10.0.0.1", 00:16:43.059 "trsvcid": "50534" 00:16:43.059 }, 00:16:43.059 "auth": { 00:16:43.059 "state": "completed", 00:16:43.059 "digest": "sha512", 00:16:43.059 "dhgroup": "ffdhe8192" 00:16:43.059 } 00:16:43.059 } 00:16:43.059 ]' 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.059 18:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.059 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.059 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.059 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.059 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.317 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:43.317 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: --dhchap-ctrl-secret DHHC-1:02:MDg3YmU3MjRkMzVhOTZlNzdiMmI5M2I5OTAwMjg2YTUzZWU2YWM3MTFkNWMzNjcwoROWPQ==: 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.249 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.514 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:44.514 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.514 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.514 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.514 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.514 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.515 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.547 00:16:45.547 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.547 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.547 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.805 18:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.805 { 00:16:45.805 "cntlid": 141, 00:16:45.805 "qid": 0, 00:16:45.805 "state": "enabled", 00:16:45.805 "thread": "nvmf_tgt_poll_group_000", 00:16:45.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.805 "listen_address": { 00:16:45.805 "trtype": "TCP", 00:16:45.805 "adrfam": "IPv4", 00:16:45.805 "traddr": "10.0.0.2", 00:16:45.805 "trsvcid": "4420" 00:16:45.805 }, 00:16:45.805 "peer_address": { 00:16:45.805 "trtype": "TCP", 00:16:45.805 "adrfam": "IPv4", 00:16:45.805 "traddr": "10.0.0.1", 00:16:45.805 "trsvcid": "50542" 00:16:45.805 }, 00:16:45.805 "auth": { 00:16:45.805 "state": "completed", 00:16:45.805 "digest": "sha512", 00:16:45.805 "dhgroup": "ffdhe8192" 00:16:45.805 } 00:16:45.805 } 00:16:45.805 ]' 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.805 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.063 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:46.063 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:01:MmUwOTFiY2MyODgwNTJmMWFlMjcyOWVkNWRiMjRjZjIM0Mcu: 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.996 18:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.253 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:47.253 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.254 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.187 00:16:48.187 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.187 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.187 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.445 { 00:16:48.445 "cntlid": 143, 00:16:48.445 "qid": 0, 00:16:48.445 "state": "enabled", 00:16:48.445 "thread": "nvmf_tgt_poll_group_000", 00:16:48.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:48.445 "listen_address": { 00:16:48.445 "trtype": "TCP", 00:16:48.445 "adrfam": 
"IPv4", 00:16:48.445 "traddr": "10.0.0.2", 00:16:48.445 "trsvcid": "4420" 00:16:48.445 }, 00:16:48.445 "peer_address": { 00:16:48.445 "trtype": "TCP", 00:16:48.445 "adrfam": "IPv4", 00:16:48.445 "traddr": "10.0.0.1", 00:16:48.445 "trsvcid": "50578" 00:16:48.445 }, 00:16:48.445 "auth": { 00:16:48.445 "state": "completed", 00:16:48.445 "digest": "sha512", 00:16:48.445 "dhgroup": "ffdhe8192" 00:16:48.445 } 00:16:48.445 } 00:16:48.445 ]' 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.445 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.703 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:48.703 18:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.636 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.894 18:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.894 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.152 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.152 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.152 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.152 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.717 00:16:50.974 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.974 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.974 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.232 { 00:16:51.232 "cntlid": 145, 00:16:51.232 "qid": 0, 00:16:51.232 "state": "enabled", 00:16:51.232 "thread": "nvmf_tgt_poll_group_000", 00:16:51.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:51.232 "listen_address": { 00:16:51.232 "trtype": "TCP", 00:16:51.232 "adrfam": "IPv4", 00:16:51.232 "traddr": "10.0.0.2", 00:16:51.232 "trsvcid": "4420" 00:16:51.232 }, 00:16:51.232 "peer_address": { 00:16:51.232 "trtype": "TCP", 00:16:51.232 "adrfam": "IPv4", 00:16:51.232 "traddr": "10.0.0.1", 00:16:51.232 "trsvcid": "60118" 00:16:51.232 }, 00:16:51.232 "auth": { 00:16:51.232 "state": 
"completed", 00:16:51.232 "digest": "sha512", 00:16:51.232 "dhgroup": "ffdhe8192" 00:16:51.232 } 00:16:51.232 } 00:16:51.232 ]' 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.232 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.490 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:51.490 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:OTYwYmEwMDRmZmQxYWE2ZjJmMDIwOWYzNzNjYTA0YzBiNmQxYWM0NmYwMTBkYjUxHVx71Q==: --dhchap-ctrl-secret 
DHHC-1:03:NzM1ZWJmZTljNWVlZGU0N2E2MjViMjYzMjRiMDY4NTFjZTU5MGU2MDM1ZDFkYmI1MTUwYmQ3YTUxMmMyYjcwNZCDojc=: 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:52.423 18:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:53.356 request: 00:16:53.356 { 00:16:53.356 "name": "nvme0", 00:16:53.356 "trtype": "tcp", 00:16:53.356 "traddr": "10.0.0.2", 00:16:53.356 "adrfam": "ipv4", 00:16:53.356 "trsvcid": "4420", 00:16:53.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:53.356 "prchk_reftag": false, 00:16:53.356 "prchk_guard": false, 00:16:53.356 "hdgst": false, 00:16:53.356 "ddgst": false, 00:16:53.356 "dhchap_key": "key2", 00:16:53.356 "allow_unrecognized_csi": false, 00:16:53.356 "method": "bdev_nvme_attach_controller", 00:16:53.356 "req_id": 1 00:16:53.356 } 00:16:53.356 Got JSON-RPC error response 00:16:53.356 response: 00:16:53.356 { 00:16:53.356 "code": -5, 00:16:53.356 "message": 
"Input/output error" 00:16:53.356 } 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.356 18:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.356 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.922 request: 00:16:53.922 { 00:16:53.922 "name": "nvme0", 00:16:53.922 "trtype": "tcp", 00:16:53.922 "traddr": "10.0.0.2", 00:16:53.922 "adrfam": "ipv4", 00:16:53.922 "trsvcid": "4420", 00:16:53.922 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:53.922 "prchk_reftag": false, 00:16:53.922 "prchk_guard": false, 00:16:53.922 "hdgst": 
false, 00:16:53.922 "ddgst": false, 00:16:53.922 "dhchap_key": "key1", 00:16:53.922 "dhchap_ctrlr_key": "ckey2", 00:16:53.922 "allow_unrecognized_csi": false, 00:16:53.922 "method": "bdev_nvme_attach_controller", 00:16:53.922 "req_id": 1 00:16:53.922 } 00:16:53.922 Got JSON-RPC error response 00:16:53.922 response: 00:16:53.922 { 00:16:53.922 "code": -5, 00:16:53.922 "message": "Input/output error" 00:16:53.922 } 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.922 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.857 request: 00:16:54.857 { 00:16:54.857 "name": "nvme0", 00:16:54.857 "trtype": 
"tcp", 00:16:54.857 "traddr": "10.0.0.2", 00:16:54.857 "adrfam": "ipv4", 00:16:54.857 "trsvcid": "4420", 00:16:54.857 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.857 "prchk_reftag": false, 00:16:54.857 "prchk_guard": false, 00:16:54.857 "hdgst": false, 00:16:54.857 "ddgst": false, 00:16:54.857 "dhchap_key": "key1", 00:16:54.857 "dhchap_ctrlr_key": "ckey1", 00:16:54.857 "allow_unrecognized_csi": false, 00:16:54.857 "method": "bdev_nvme_attach_controller", 00:16:54.857 "req_id": 1 00:16:54.857 } 00:16:54.857 Got JSON-RPC error response 00:16:54.857 response: 00:16:54.857 { 00:16:54.857 "code": -5, 00:16:54.857 "message": "Input/output error" 00:16:54.857 } 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1455779 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1455779 ']' 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1455779 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455779 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455779' 00:16:54.857 killing process with pid 1455779 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1455779 00:16:54.857 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1455779 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1478638 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1478638 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1478638 ']' 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.116 18:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1478638 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1478638 ']' 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.373 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.374 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.374 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.374 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.632 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.632 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:55.632 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:55.632 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.632 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.632 null0 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cDc 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.IPv ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IPv 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JFB 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.25T ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.25T 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wmj 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.890 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.wge ]] 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wge 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.b5b 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.891 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.271 nvme0n1 00:16:57.271 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.271 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.271 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.530 { 00:16:57.530 "cntlid": 1, 00:16:57.530 "qid": 0, 00:16:57.530 "state": "enabled", 00:16:57.530 "thread": "nvmf_tgt_poll_group_000", 00:16:57.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:57.530 "listen_address": { 00:16:57.530 "trtype": "TCP", 00:16:57.530 "adrfam": "IPv4", 00:16:57.530 "traddr": "10.0.0.2", 00:16:57.530 "trsvcid": "4420" 00:16:57.530 }, 00:16:57.530 "peer_address": { 00:16:57.530 "trtype": "TCP", 00:16:57.530 "adrfam": "IPv4", 00:16:57.530 "traddr": 
"10.0.0.1", 00:16:57.530 "trsvcid": "60162" 00:16:57.530 }, 00:16:57.530 "auth": { 00:16:57.530 "state": "completed", 00:16:57.530 "digest": "sha512", 00:16:57.530 "dhgroup": "ffdhe8192" 00:16:57.530 } 00:16:57.530 } 00:16:57.530 ]' 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.530 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.100 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:58.100 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:16:59.034 18:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:59.034 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:59.034 18:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.034 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.293 request: 00:16:59.293 { 00:16:59.293 "name": "nvme0", 00:16:59.293 "trtype": "tcp", 00:16:59.293 "traddr": "10.0.0.2", 00:16:59.293 "adrfam": "ipv4", 00:16:59.293 "trsvcid": "4420", 00:16:59.293 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.293 "prchk_reftag": false, 00:16:59.293 "prchk_guard": false, 00:16:59.293 "hdgst": false, 00:16:59.293 "ddgst": false, 00:16:59.293 "dhchap_key": "key3", 00:16:59.293 
"allow_unrecognized_csi": false, 00:16:59.293 "method": "bdev_nvme_attach_controller", 00:16:59.293 "req_id": 1 00:16:59.293 } 00:16:59.293 Got JSON-RPC error response 00:16:59.293 response: 00:16:59.293 { 00:16:59.293 "code": -5, 00:16:59.293 "message": "Input/output error" 00:16:59.293 } 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:59.551 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:59.808 18:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.808 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.066 request: 00:17:00.066 { 00:17:00.066 "name": "nvme0", 00:17:00.066 "trtype": "tcp", 00:17:00.066 "traddr": "10.0.0.2", 00:17:00.066 "adrfam": "ipv4", 00:17:00.066 "trsvcid": "4420", 00:17:00.066 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:00.066 "prchk_reftag": false, 00:17:00.066 "prchk_guard": false, 00:17:00.066 "hdgst": false, 00:17:00.066 "ddgst": false, 00:17:00.066 "dhchap_key": "key3", 00:17:00.066 "allow_unrecognized_csi": false, 00:17:00.066 "method": "bdev_nvme_attach_controller", 00:17:00.066 "req_id": 1 00:17:00.066 } 00:17:00.066 Got JSON-RPC error response 00:17:00.066 response: 00:17:00.066 { 00:17:00.066 "code": -5, 00:17:00.066 "message": "Input/output error" 00:17:00.066 } 00:17:00.066 
18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.066 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:00.324 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.325 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:00.325 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.325 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.325 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.325 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.890 request: 00:17:00.890 { 00:17:00.890 "name": "nvme0", 00:17:00.890 "trtype": "tcp", 00:17:00.890 "traddr": "10.0.0.2", 00:17:00.890 "adrfam": "ipv4", 00:17:00.890 "trsvcid": "4420", 00:17:00.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:00.890 "prchk_reftag": false, 00:17:00.890 "prchk_guard": false, 00:17:00.890 "hdgst": false, 00:17:00.890 "ddgst": false, 00:17:00.890 "dhchap_key": "key0", 00:17:00.890 "dhchap_ctrlr_key": "key1", 00:17:00.890 "allow_unrecognized_csi": false, 00:17:00.890 "method": "bdev_nvme_attach_controller", 00:17:00.890 "req_id": 1 00:17:00.890 } 00:17:00.890 Got JSON-RPC error response 00:17:00.890 response: 00:17:00.890 { 00:17:00.890 "code": -5, 00:17:00.890 "message": "Input/output error" 00:17:00.890 } 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:00.890 18:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:01.148 nvme0n1 00:17:01.148 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:01.148 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:01.148 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.406 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.406 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.406 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:01.664 18:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:03.039 nvme0n1 00:17:03.039 18:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:03.039 18:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:03.039 18:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 
18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:03.297 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.555 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.555 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:17:03.555 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: --dhchap-ctrl-secret DHHC-1:03:NjA4M2FmNjQxMWY3ZjIyZjBkZGM2ZDMyOTExODNiNTdjYjQ4NDFjY2I3MTQ0NTlhYTAyYjUzZmI4YzJjNzgwN14gqqs=: 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.491 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:04.750 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.683 request: 00:17:05.683 { 00:17:05.683 "name": "nvme0", 00:17:05.683 "trtype": "tcp", 00:17:05.683 "traddr": "10.0.0.2", 00:17:05.683 "adrfam": "ipv4", 00:17:05.683 "trsvcid": "4420", 00:17:05.683 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:05.683 "prchk_reftag": false, 00:17:05.683 "prchk_guard": false, 00:17:05.683 "hdgst": false, 00:17:05.683 "ddgst": false, 00:17:05.683 "dhchap_key": "key1", 00:17:05.683 "allow_unrecognized_csi": false, 00:17:05.683 "method": "bdev_nvme_attach_controller", 00:17:05.683 "req_id": 1 00:17:05.683 } 00:17:05.683 Got JSON-RPC error response 00:17:05.683 response: 00:17:05.683 { 00:17:05.683 "code": -5, 00:17:05.683 "message": "Input/output error" 00:17:05.683 } 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.683 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.062 nvme0n1 00:17:07.062 18:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:07.062 18:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:07.062 18:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.320 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.320 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.320 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:07.578 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:07.836 nvme0n1 00:17:07.836 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:07.836 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:07.836 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.094 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.094 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.094 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: '' 2s 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: ]] 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTZkYWZiNDJjODAwZjg1NDgyMmNlOGE4NDcyMGEwOTWCxdEb: 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:08.352 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:10.892 
18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: 2s 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:10.892 18:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: ]] 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTFhNDRlY2ZjMmMwNWY0Y2ZjNzdiMzAzMTdlMTIyNTA4NzUxMDRiZjE2YzNkZDdj8CdsAQ==: 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:10.892 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.847 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:14.224 nvme0n1 00:17:14.224 18:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.224 18:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.224 18:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.224 18:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.224 18:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.224 18:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.791 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:14.791 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:14.791 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:15.050 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:15.308 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:15.308 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:15.308 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:15.566 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:16.500 request: 00:17:16.500 { 00:17:16.500 "name": "nvme0", 00:17:16.500 "dhchap_key": "key1", 00:17:16.500 "dhchap_ctrlr_key": "key3", 00:17:16.500 "method": "bdev_nvme_set_keys", 00:17:16.500 "req_id": 1 00:17:16.500 } 00:17:16.500 Got JSON-RPC error response 00:17:16.500 response: 00:17:16.500 { 00:17:16.500 "code": -13, 00:17:16.500 "message": "Permission denied" 00:17:16.500 } 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.500 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:16.759 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:16.759 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:17.697 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:17.697 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:17.698 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:17.956 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.336 nvme0n1 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:19.336 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:20.272 request: 00:17:20.272 { 00:17:20.272 "name": "nvme0", 00:17:20.272 "dhchap_key": "key2", 00:17:20.272 "dhchap_ctrlr_key": "key0", 00:17:20.272 "method": "bdev_nvme_set_keys", 00:17:20.272 "req_id": 1 00:17:20.272 } 00:17:20.272 Got JSON-RPC error response 00:17:20.272 response: 00:17:20.272 { 00:17:20.272 "code": -13, 00:17:20.272 "message": "Permission denied" 00:17:20.272 } 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:20.272 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.532 
18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:20.532 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:21.471 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:21.471 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:21.471 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1455824 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1455824 ']' 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1455824 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455824 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.730 18:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455824' 00:17:21.730 killing process with pid 1455824 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1455824 00:17:21.730 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1455824 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.300 rmmod nvme_tcp 00:17:22.300 rmmod nvme_fabrics 00:17:22.300 rmmod nvme_keyring 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1478638 ']' 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1478638 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1478638 ']' 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # kill -0 1478638 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1478638 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.300 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1478638' 00:17:22.301 killing process with pid 1478638 00:17:22.301 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1478638 00:17:22.301 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1478638 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.560 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.099 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.099 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cDc /tmp/spdk.key-sha256.JFB /tmp/spdk.key-sha384.wmj /tmp/spdk.key-sha512.b5b /tmp/spdk.key-sha512.IPv /tmp/spdk.key-sha384.25T /tmp/spdk.key-sha256.wge '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:25.099 00:17:25.099 real 3m31.704s 00:17:25.099 user 8m17.578s 00:17:25.099 sys 0m27.736s 00:17:25.099 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.099 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.099 ************************************ 00:17:25.099 END TEST nvmf_auth_target 00:17:25.100 ************************************ 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:25.100 18:06:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.100 ************************************ 00:17:25.100 START TEST nvmf_bdevio_no_huge 00:17:25.100 ************************************ 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:25.100 * Looking for test storage... 00:17:25.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.100 18:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.100 --rc genhtml_branch_coverage=1 00:17:25.100 --rc genhtml_function_coverage=1 00:17:25.100 --rc genhtml_legend=1 00:17:25.100 --rc geninfo_all_blocks=1 00:17:25.100 --rc geninfo_unexecuted_blocks=1 00:17:25.100 00:17:25.100 ' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.100 --rc genhtml_branch_coverage=1 00:17:25.100 --rc genhtml_function_coverage=1 00:17:25.100 --rc genhtml_legend=1 00:17:25.100 --rc geninfo_all_blocks=1 00:17:25.100 --rc geninfo_unexecuted_blocks=1 00:17:25.100 00:17:25.100 ' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.100 --rc genhtml_branch_coverage=1 00:17:25.100 --rc genhtml_function_coverage=1 00:17:25.100 --rc genhtml_legend=1 00:17:25.100 --rc geninfo_all_blocks=1 00:17:25.100 --rc geninfo_unexecuted_blocks=1 00:17:25.100 00:17:25.100 ' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.100 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.100 --rc genhtml_branch_coverage=1 00:17:25.100 --rc genhtml_function_coverage=1 00:17:25.100 --rc genhtml_legend=1 00:17:25.100 --rc geninfo_all_blocks=1 00:17:25.100 --rc geninfo_unexecuted_blocks=1 00:17:25.100 00:17:25.100 ' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.100 18:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.100 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.101 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:17:27.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.007 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:27.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:27.008 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.008 
18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:27.008 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.008 18:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.008 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.008 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.008 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:27.008 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:27.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:17:27.266 00:17:27.266 --- 10.0.0.2 ping statistics --- 00:17:27.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.266 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:17:27.266 00:17:27.266 --- 10.0.0.1 ping statistics --- 00:17:27.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.266 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.266 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1483776 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1483776 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1483776 ']' 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.267 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.267 [2024-12-09 18:06:50.166480] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:17:27.267 [2024-12-09 18:06:50.166603] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:27.267 [2024-12-09 18:06:50.249064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.525 [2024-12-09 18:06:50.311456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.525 [2024-12-09 18:06:50.311516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.525 [2024-12-09 18:06:50.311549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.525 [2024-12-09 18:06:50.311562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.525 [2024-12-09 18:06:50.311571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:27.525 [2024-12-09 18:06:50.312568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:27.525 [2024-12-09 18:06:50.312668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:27.525 [2024-12-09 18:06:50.312733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:27.525 [2024-12-09 18:06:50.312737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 [2024-12-09 18:06:50.474670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.525 18:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 Malloc0 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 [2024-12-09 18:06:50.513577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.525 18:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:27.525 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:27.525 { 00:17:27.525 "params": { 00:17:27.525 "name": "Nvme$subsystem", 00:17:27.525 "trtype": "$TEST_TRANSPORT", 00:17:27.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:27.525 "adrfam": "ipv4", 00:17:27.525 "trsvcid": "$NVMF_PORT", 00:17:27.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:27.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:27.526 "hdgst": ${hdgst:-false}, 00:17:27.526 "ddgst": ${ddgst:-false} 00:17:27.526 }, 00:17:27.526 "method": "bdev_nvme_attach_controller" 00:17:27.526 } 00:17:27.526 EOF 00:17:27.526 )") 00:17:27.526 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:27.526 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:27.526 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:27.526 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:27.526 "params": { 00:17:27.526 "name": "Nvme1", 00:17:27.526 "trtype": "tcp", 00:17:27.526 "traddr": "10.0.0.2", 00:17:27.526 "adrfam": "ipv4", 00:17:27.526 "trsvcid": "4420", 00:17:27.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.526 "hdgst": false, 00:17:27.526 "ddgst": false 00:17:27.526 }, 00:17:27.526 "method": "bdev_nvme_attach_controller" 00:17:27.526 }' 00:17:27.526 [2024-12-09 18:06:50.564598] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:17:27.526 [2024-12-09 18:06:50.564682] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1483921 ] 00:17:27.786 [2024-12-09 18:06:50.639127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.786 [2024-12-09 18:06:50.703443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.786 [2024-12-09 18:06:50.703496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.786 [2024-12-09 18:06:50.703499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.046 I/O targets: 00:17:28.046 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:28.046 00:17:28.046 00:17:28.046 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.046 http://cunit.sourceforge.net/ 00:17:28.046 00:17:28.046 00:17:28.046 Suite: bdevio tests on: Nvme1n1 00:17:28.046 Test: blockdev write read block ...passed 00:17:28.046 Test: blockdev write zeroes read block ...passed 00:17:28.046 Test: blockdev write zeroes read no split ...passed 00:17:28.046 Test: blockdev write zeroes 
read split ...passed 00:17:28.046 Test: blockdev write zeroes read split partial ...passed 00:17:28.046 Test: blockdev reset ...[2024-12-09 18:06:51.056877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:28.046 [2024-12-09 18:06:51.057001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19382b0 (9): Bad file descriptor 00:17:28.305 [2024-12-09 18:06:51.207484] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:28.305 passed 00:17:28.305 Test: blockdev write read 8 blocks ...passed 00:17:28.305 Test: blockdev write read size > 128k ...passed 00:17:28.305 Test: blockdev write read invalid size ...passed 00:17:28.305 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.305 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.305 Test: blockdev write read max offset ...passed 00:17:28.305 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.565 Test: blockdev writev readv 8 blocks ...passed 00:17:28.565 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.565 Test: blockdev writev readv block ...passed 00:17:28.565 Test: blockdev writev readv size > 128k ...passed 00:17:28.565 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.565 Test: blockdev comparev and writev ...[2024-12-09 18:06:51.420755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.420794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.420840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 
18:06:51.420870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.421238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.421267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.421302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.421330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.421697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.421724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.421758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.421792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.422148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.422175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.422209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.565 [2024-12-09 18:06:51.422236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.565 passed 00:17:28.565 Test: blockdev nvme passthru rw ...passed 00:17:28.565 Test: blockdev nvme passthru vendor specific ...[2024-12-09 18:06:51.505784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.565 [2024-12-09 18:06:51.505813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.505975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.565 [2024-12-09 18:06:51.506001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.506163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.565 [2024-12-09 18:06:51.506189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.565 [2024-12-09 18:06:51.506343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.565 [2024-12-09 18:06:51.506369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.565 passed 00:17:28.565 Test: blockdev nvme admin passthru ...passed 00:17:28.565 Test: blockdev copy ...passed 00:17:28.565 00:17:28.565 Run Summary: Type Total Ran Passed Failed Inactive 00:17:28.565 suites 1 1 n/a 0 0 00:17:28.565 tests 23 23 23 0 0 00:17:28.565 asserts 152 152 152 0 n/a 00:17:28.565 00:17:28.565 Elapsed time = 1.242 seconds 
00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.134 rmmod nvme_tcp 00:17:29.134 rmmod nvme_fabrics 00:17:29.134 rmmod nvme_keyring 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1483776 ']' 00:17:29.134 18:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1483776 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1483776 ']' 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1483776 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.134 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483776 00:17:29.134 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:29.134 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:29.134 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483776' 00:17:29.134 killing process with pid 1483776 00:17:29.134 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1483776 00:17:29.134 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1483776 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:29.392 18:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.392 18:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.931 00:17:31.931 real 0m6.854s 00:17:31.931 user 0m11.263s 00:17:31.931 sys 0m2.712s 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.931 ************************************ 00:17:31.931 END TEST nvmf_bdevio_no_huge 00:17:31.931 ************************************ 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.931 
************************************ 00:17:31.931 START TEST nvmf_tls 00:17:31.931 ************************************ 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:31.931 * Looking for test storage... 00:17:31.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.931 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:31.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.931 --rc genhtml_branch_coverage=1 00:17:31.931 --rc genhtml_function_coverage=1 00:17:31.931 --rc genhtml_legend=1 00:17:31.931 --rc geninfo_all_blocks=1 00:17:31.931 --rc geninfo_unexecuted_blocks=1 00:17:31.932 00:17:31.932 ' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:31.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.932 --rc genhtml_branch_coverage=1 00:17:31.932 --rc genhtml_function_coverage=1 00:17:31.932 --rc genhtml_legend=1 00:17:31.932 --rc geninfo_all_blocks=1 00:17:31.932 --rc geninfo_unexecuted_blocks=1 00:17:31.932 00:17:31.932 ' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:31.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.932 --rc genhtml_branch_coverage=1 00:17:31.932 --rc genhtml_function_coverage=1 00:17:31.932 --rc genhtml_legend=1 00:17:31.932 --rc geninfo_all_blocks=1 00:17:31.932 --rc geninfo_unexecuted_blocks=1 00:17:31.932 00:17:31.932 ' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:31.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.932 --rc genhtml_branch_coverage=1 00:17:31.932 --rc genhtml_function_coverage=1 00:17:31.932 --rc genhtml_legend=1 00:17:31.932 --rc geninfo_all_blocks=1 00:17:31.932 --rc geninfo_unexecuted_blocks=1 00:17:31.932 00:17:31.932 ' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.932 
18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:31.932 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.837 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.837 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.837 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.837 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:33.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:33.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:33.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:33.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:33.838 18:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.838 
18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.838 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.097 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.097 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.097 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.097 18:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:34.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:17:34.097 00:17:34.097 --- 10.0.0.2 ping statistics --- 00:17:34.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.097 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:34.097 00:17:34.097 --- 10.0.0.1 ping statistics --- 00:17:34.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.097 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1486031 00:17:34.097 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1486031 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1486031 ']' 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.098 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 [2024-12-09 18:06:57.106873] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:17:34.098 [2024-12-09 18:06:57.106963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.355 [2024-12-09 18:06:57.187230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.355 [2024-12-09 18:06:57.242951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.355 [2024-12-09 18:06:57.243005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:34.355 [2024-12-09 18:06:57.243020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.355 [2024-12-09 18:06:57.243031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.355 [2024-12-09 18:06:57.243042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.355 [2024-12-09 18:06:57.243659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:34.355 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:34.613 true 00:17:34.613 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:34.613 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:34.872 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:34.872 18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:34.872 
18:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:35.440 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:35.440 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:35.440 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:35.440 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:35.440 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:35.699 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:35.699 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:35.956 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:35.957 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:35.957 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:35.957 18:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:36.524 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:36.524 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:36.524 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:36.524 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.524 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:36.783 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:36.783 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:36.783 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:37.351 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:37.609 18:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TIlfI4CNX4 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.RZ9wBSz2nP 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TIlfI4CNX4 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.RZ9wBSz2nP 00:17:37.609 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:37.867 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:38.125 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TIlfI4CNX4 00:17:38.125 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TIlfI4CNX4 00:17:38.125 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:38.383 [2024-12-09 18:07:01.346042] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.383 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:38.641 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:38.899 [2024-12-09 18:07:01.875471] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.899 [2024-12-09 18:07:01.875761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.899 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:39.157 malloc0 00:17:39.157 18:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:39.417 18:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TIlfI4CNX4 00:17:39.705 18:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:39.964 18:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TIlfI4CNX4 00:17:52.176 Initializing NVMe Controllers 00:17:52.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:52.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:52.176 Initialization complete. Launching workers. 
00:17:52.176 ======================================================== 00:17:52.176 Latency(us) 00:17:52.176 Device Information : IOPS MiB/s Average min max 00:17:52.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8784.12 34.31 7287.76 1055.26 11817.77 00:17:52.176 ======================================================== 00:17:52.176 Total : 8784.12 34.31 7287.76 1055.26 11817.77 00:17:52.176 00:17:52.176 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TIlfI4CNX4 00:17:52.176 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.176 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.176 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TIlfI4CNX4 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1487958 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1487958 /var/tmp/bdevperf.sock 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1487958 ']' 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.177 [2024-12-09 18:07:13.144101] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:17:52.177 [2024-12-09 18:07:13.144197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487958 ] 00:17:52.177 [2024-12-09 18:07:13.222300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.177 [2024-12-09 18:07:13.284751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TIlfI4CNX4 00:17:52.177 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:52.177 [2024-12-09 18:07:13.951964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.177 TLSTESTn1 00:17:52.177 18:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:52.177 Running I/O for 10 seconds... 00:17:53.557 3340.00 IOPS, 13.05 MiB/s [2024-12-09T17:07:17.535Z] 3402.50 IOPS, 13.29 MiB/s [2024-12-09T17:07:18.471Z] 3430.33 IOPS, 13.40 MiB/s [2024-12-09T17:07:19.410Z] 3410.00 IOPS, 13.32 MiB/s [2024-12-09T17:07:20.345Z] 3402.80 IOPS, 13.29 MiB/s [2024-12-09T17:07:21.285Z] 3407.50 IOPS, 13.31 MiB/s [2024-12-09T17:07:22.225Z] 3418.71 IOPS, 13.35 MiB/s [2024-12-09T17:07:23.604Z] 3425.62 IOPS, 13.38 MiB/s [2024-12-09T17:07:24.544Z] 3428.67 IOPS, 13.39 MiB/s [2024-12-09T17:07:24.544Z] 3434.90 IOPS, 13.42 MiB/s 00:18:01.503 Latency(us) 00:18:01.503 [2024-12-09T17:07:24.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.503 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.503 Verification LBA range: start 0x0 length 0x2000 00:18:01.503 TLSTESTn1 : 10.04 3434.43 13.42 0.00 0.00 37182.69 11019.76 39418.69 00:18:01.503 [2024-12-09T17:07:24.544Z] =================================================================================================================== 00:18:01.503 [2024-12-09T17:07:24.544Z] Total : 3434.43 13.42 0.00 0.00 37182.69 11019.76 39418.69 00:18:01.503 { 00:18:01.503 "results": [ 00:18:01.503 { 00:18:01.503 "job": "TLSTESTn1", 00:18:01.503 "core_mask": "0x4", 00:18:01.503 "workload": "verify", 00:18:01.503 "status": "finished", 00:18:01.503 "verify_range": { 00:18:01.503 "start": 0, 00:18:01.503 "length": 8192 00:18:01.503 }, 00:18:01.503 "queue_depth": 128, 00:18:01.503 "io_size": 4096, 00:18:01.503 "runtime": 10.038045, 00:18:01.503 "iops": 
3434.4336969997644, 00:18:01.503 "mibps": 13.41575662890533, 00:18:01.503 "io_failed": 0, 00:18:01.503 "io_timeout": 0, 00:18:01.503 "avg_latency_us": 37182.69350823195, 00:18:01.503 "min_latency_us": 11019.757037037038, 00:18:01.503 "max_latency_us": 39418.69037037037 00:18:01.503 } 00:18:01.503 ], 00:18:01.503 "core_count": 1 00:18:01.503 } 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1487958 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1487958 ']' 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1487958 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487958 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487958' 00:18:01.503 killing process with pid 1487958 00:18:01.503 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1487958 00:18:01.503 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.503 00:18:01.503 Latency(us) 00:18:01.503 [2024-12-09T17:07:24.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.503 [2024-12-09T17:07:24.544Z] 
=================================================================================================================== 00:18:01.503 [2024-12-09T17:07:24.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1487958 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RZ9wBSz2nP 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RZ9wBSz2nP 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RZ9wBSz2nP 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RZ9wBSz2nP 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1489273 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1489273 /var/tmp/bdevperf.sock 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1489273 ']' 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.504 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 [2024-12-09 18:07:24.568378] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:01.764 [2024-12-09 18:07:24.568468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489273 ] 00:18:01.764 [2024-12-09 18:07:24.639506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.764 [2024-12-09 18:07:24.698901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.022 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.022 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:02.022 18:07:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RZ9wBSz2nP 00:18:02.279 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.539 [2024-12-09 18:07:25.325857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.539 [2024-12-09 18:07:25.334333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.539 [2024-12-09 18:07:25.335047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2af30 (107): Transport endpoint is not connected 00:18:02.539 [2024-12-09 18:07:25.336033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2af30 (9): Bad file descriptor 00:18:02.539 
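For reference, each bdevperf TLS case in this log reduces to the same two JSON-RPC calls against the app's private socket, visible in the traces above: `keyring_file_add_key` to register the PSK file under a key name, then `bdev_nvme_attach_controller` with `--psk key0`. A minimal sketch of that sequence, with flags and paths copied from the rpc.py invocations in this log (`$SPDK_DIR` is a placeholder for the checkout, here `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk`):

```shell
# Register the PSK file, then attach a TLS-enabled NVMe-oF/TCP controller.
# All values below are copied from the rpc.py invocations in this log;
# $SPDK_DIR is an assumed placeholder for the SPDK checkout.
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC keyring_file_add_key key0 /tmp/tmp.TIlfI4CNX4
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key0
```

When the host/subsystem pair has a matching PSK on the target, the attach succeeds and `bdevperf.py ... perform_tests` drives I/O over the TLS socket; when it does not, the attach fails with the `-5` (Input/output error) JSON-RPC responses seen in the negative cases.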
[2024-12-09 18:07:25.337034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:02.539 [2024-12-09 18:07:25.337057] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:02.539 [2024-12-09 18:07:25.337080] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:02.539 [2024-12-09 18:07:25.337110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:02.539 request: 00:18:02.539 { 00:18:02.539 "name": "TLSTEST", 00:18:02.539 "trtype": "tcp", 00:18:02.539 "traddr": "10.0.0.2", 00:18:02.539 "adrfam": "ipv4", 00:18:02.539 "trsvcid": "4420", 00:18:02.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.539 "prchk_reftag": false, 00:18:02.539 "prchk_guard": false, 00:18:02.539 "hdgst": false, 00:18:02.539 "ddgst": false, 00:18:02.539 "psk": "key0", 00:18:02.539 "allow_unrecognized_csi": false, 00:18:02.539 "method": "bdev_nvme_attach_controller", 00:18:02.539 "req_id": 1 00:18:02.539 } 00:18:02.539 Got JSON-RPC error response 00:18:02.539 response: 00:18:02.539 { 00:18:02.539 "code": -5, 00:18:02.539 "message": "Input/output error" 00:18:02.539 } 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1489273 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1489273 ']' 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1489273 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489273 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489273' 00:18:02.539 killing process with pid 1489273 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1489273 00:18:02.539 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.539 00:18:02.539 Latency(us) 00:18:02.539 [2024-12-09T17:07:25.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.539 [2024-12-09T17:07:25.580Z] =================================================================================================================== 00:18:02.539 [2024-12-09T17:07:25.580Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.539 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1489273 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TIlfI4CNX4 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
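The `killprocess` helper traced repeatedly above (`kill -0` to check the pid is alive, `ps -o comm=` to check the process name, a guard against `sudo` wrappers, then the kill) can be sketched as a standalone function. This is a portable approximation reconstructed from the traces in this log, not the exact `common/autotest_common.sh` source; the real helper invokes `ps --no-headers -o comm=` and has a separate branch when the process name is `sudo`.

```shell
# Portable sketch of the killprocess pattern from common/autotest_common.sh,
# reconstructed from the xtrace output in this log (an approximation, not
# the exact upstream implementation).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # pid must refer to a live process
    local name
    name=$(ps -o comm= -p "$pid")            # real helper: ps --no-headers -o comm=
    [ "$name" = sudo ] && return 1           # sudo wrappers get special handling upstream
    echo "killing process with pid $pid"
    kill "$pid"
}
```

The test script calls it from the EXIT/SIGINT trap path, e.g. `killprocess "$bdevperf_pid"` at `target/tls.sh@37`, which is why each case above ends with a "killing process with pid ..." line followed by the bdevperf shutdown latency summary.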
00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TIlfI4CNX4 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TIlfI4CNX4 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TIlfI4CNX4 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1489380 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1489380 
/var/tmp/bdevperf.sock 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1489380 ']' 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.798 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.798 [2024-12-09 18:07:25.661050] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:02.798 [2024-12-09 18:07:25.661126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489380 ] 00:18:02.798 [2024-12-09 18:07:25.728448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.798 [2024-12-09 18:07:25.785098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.056 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.056 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:03.056 18:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TIlfI4CNX4 00:18:03.314 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:03.572 [2024-12-09 18:07:26.502984] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.572 [2024-12-09 18:07:26.508413] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:03.572 [2024-12-09 18:07:26.508447] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:03.572 [2024-12-09 18:07:26.508484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:03.572 [2024-12-09 18:07:26.509037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c3f30 (107): Transport endpoint is not connected 00:18:03.572 [2024-12-09 18:07:26.510024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c3f30 (9): Bad file descriptor 00:18:03.572 [2024-12-09 18:07:26.511023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:03.572 [2024-12-09 18:07:26.511046] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:03.572 [2024-12-09 18:07:26.511068] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:03.572 [2024-12-09 18:07:26.511107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:03.572 request: 00:18:03.572 { 00:18:03.572 "name": "TLSTEST", 00:18:03.572 "trtype": "tcp", 00:18:03.572 "traddr": "10.0.0.2", 00:18:03.572 "adrfam": "ipv4", 00:18:03.572 "trsvcid": "4420", 00:18:03.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.572 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:03.572 "prchk_reftag": false, 00:18:03.572 "prchk_guard": false, 00:18:03.572 "hdgst": false, 00:18:03.572 "ddgst": false, 00:18:03.572 "psk": "key0", 00:18:03.572 "allow_unrecognized_csi": false, 00:18:03.572 "method": "bdev_nvme_attach_controller", 00:18:03.572 "req_id": 1 00:18:03.572 } 00:18:03.572 Got JSON-RPC error response 00:18:03.572 response: 00:18:03.572 { 00:18:03.572 "code": -5, 00:18:03.573 "message": "Input/output error" 00:18:03.573 } 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1489380 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1489380 ']' 00:18:03.573 18:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1489380 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489380 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489380' 00:18:03.573 killing process with pid 1489380 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1489380 00:18:03.573 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.573 00:18:03.573 Latency(us) 00:18:03.573 [2024-12-09T17:07:26.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.573 [2024-12-09T17:07:26.614Z] =================================================================================================================== 00:18:03.573 [2024-12-09T17:07:26.614Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.573 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1489380 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.831 18:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TIlfI4CNX4 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TIlfI4CNX4 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TIlfI4CNX4 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TIlfI4CNX4 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1489538 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1489538 /var/tmp/bdevperf.sock 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1489538 ']' 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.831 18:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.831 [2024-12-09 18:07:26.818609] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:03.831 [2024-12-09 18:07:26.818689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489538 ] 00:18:04.089 [2024-12-09 18:07:26.890181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.089 [2024-12-09 18:07:26.951667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.089 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.089 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.089 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TIlfI4CNX4 00:18:04.347 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.605 [2024-12-09 18:07:27.579965] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.605 [2024-12-09 18:07:27.590090] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:04.605 [2024-12-09 18:07:27.590136] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:04.605 [2024-12-09 18:07:27.590188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:04.605 [2024-12-09 18:07:27.591113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c9f30 (107): Transport endpoint is not connected 00:18:04.605 [2024-12-09 18:07:27.592101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c9f30 (9): Bad file descriptor 00:18:04.605 [2024-12-09 18:07:27.593102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:04.605 [2024-12-09 18:07:27.593133] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.605 [2024-12-09 18:07:27.593155] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:04.605 [2024-12-09 18:07:27.593189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:04.605 request: 00:18:04.605 { 00:18:04.605 "name": "TLSTEST", 00:18:04.605 "trtype": "tcp", 00:18:04.605 "traddr": "10.0.0.2", 00:18:04.605 "adrfam": "ipv4", 00:18:04.605 "trsvcid": "4420", 00:18:04.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:04.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.605 "prchk_reftag": false, 00:18:04.605 "prchk_guard": false, 00:18:04.605 "hdgst": false, 00:18:04.605 "ddgst": false, 00:18:04.605 "psk": "key0", 00:18:04.605 "allow_unrecognized_csi": false, 00:18:04.605 "method": "bdev_nvme_attach_controller", 00:18:04.605 "req_id": 1 00:18:04.605 } 00:18:04.605 Got JSON-RPC error response 00:18:04.605 response: 00:18:04.605 { 00:18:04.605 "code": -5, 00:18:04.605 "message": "Input/output error" 00:18:04.605 } 00:18:04.605 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1489538 00:18:04.605 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1489538 ']' 00:18:04.605 18:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1489538 00:18:04.605 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:04.605 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.605 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489538 00:18:04.863 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:04.863 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:04.863 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489538' 00:18:04.863 killing process with pid 1489538 00:18:04.863 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1489538 00:18:04.863 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.863 00:18:04.863 Latency(us) 00:18:04.863 [2024-12-09T17:07:27.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.863 [2024-12-09T17:07:27.904Z] =================================================================================================================== 00:18:04.863 [2024-12-09T17:07:27.904Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1489538 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.864 18:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1489668 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1489668 /var/tmp/bdevperf.sock 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1489668 ']' 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.864 18:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.124 [2024-12-09 18:07:27.927240] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:05.124 [2024-12-09 18:07:27.927331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489668 ] 00:18:05.124 [2024-12-09 18:07:28.002733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.124 [2024-12-09 18:07:28.066707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.383 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.383 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.383 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:05.641 [2024-12-09 18:07:28.486279] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:05.641 [2024-12-09 18:07:28.486324] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:05.641 request: 00:18:05.641 { 00:18:05.641 "name": "key0", 00:18:05.641 "path": "", 00:18:05.641 "method": "keyring_file_add_key", 00:18:05.641 "req_id": 1 00:18:05.641 } 00:18:05.641 Got JSON-RPC error response 00:18:05.641 response: 00:18:05.641 { 00:18:05.641 "code": -1, 00:18:05.641 "message": "Operation not permitted" 00:18:05.641 } 00:18:05.641 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:05.901 [2024-12-09 18:07:28.755113] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:05.901 [2024-12-09 18:07:28.755189] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:05.901 request: 00:18:05.901 { 00:18:05.901 "name": "TLSTEST", 00:18:05.901 "trtype": "tcp", 00:18:05.901 "traddr": "10.0.0.2", 00:18:05.901 "adrfam": "ipv4", 00:18:05.901 "trsvcid": "4420", 00:18:05.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.901 "prchk_reftag": false, 00:18:05.901 "prchk_guard": false, 00:18:05.901 "hdgst": false, 00:18:05.901 "ddgst": false, 00:18:05.901 "psk": "key0", 00:18:05.901 "allow_unrecognized_csi": false, 00:18:05.901 "method": "bdev_nvme_attach_controller", 00:18:05.901 "req_id": 1 00:18:05.901 } 00:18:05.901 Got JSON-RPC error response 00:18:05.901 response: 00:18:05.901 { 00:18:05.901 "code": -126, 00:18:05.901 "message": "Required key not available" 00:18:05.901 } 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1489668 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1489668 ']' 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1489668 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489668 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489668' 00:18:05.901 killing process with pid 1489668 
00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1489668 00:18:05.901 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.901 00:18:05.901 Latency(us) 00:18:05.901 [2024-12-09T17:07:28.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.901 [2024-12-09T17:07:28.942Z] =================================================================================================================== 00:18:05.901 [2024-12-09T17:07:28.942Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.901 18:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1489668 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1486031 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1486031 ']' 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1486031 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486031 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486031' 00:18:06.161 killing process with pid 1486031 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1486031 00:18:06.161 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1486031 00:18:06.421 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:06.421 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:06.421 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.z7LdbJtCei 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:06.422 18:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.z7LdbJtCei 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1489928 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1489928 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1489928 ']' 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.422 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.422 [2024-12-09 18:07:29.408480] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:06.422 [2024-12-09 18:07:29.408606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.681 [2024-12-09 18:07:29.486294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.681 [2024-12-09 18:07:29.546009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.681 [2024-12-09 18:07:29.546083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.681 [2024-12-09 18:07:29.546097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.681 [2024-12-09 18:07:29.546108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.681 [2024-12-09 18:07:29.546118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.681 [2024-12-09 18:07:29.546753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.z7LdbJtCei 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z7LdbJtCei 00:18:06.681 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:06.939 [2024-12-09 18:07:29.933188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.939 18:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.507 18:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.507 [2024-12-09 18:07:30.534879] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.507 [2024-12-09 18:07:30.535162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:07.767 18:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.025 malloc0 00:18:08.026 18:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:08.284 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:08.542 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z7LdbJtCei 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z7LdbJtCei 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1490224 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.800 18:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1490224 /var/tmp/bdevperf.sock 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1490224 ']' 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.800 18:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.800 [2024-12-09 18:07:31.798695] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:08.800 [2024-12-09 18:07:31.798792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490224 ] 00:18:09.058 [2024-12-09 18:07:31.869088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.058 [2024-12-09 18:07:31.926732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.058 18:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.058 18:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.058 18:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:09.315 18:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:09.883 [2024-12-09 18:07:32.623769] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.883 TLSTESTn1 00:18:09.883 18:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:09.883 Running I/O for 10 seconds... 
00:18:12.201 3024.00 IOPS, 11.81 MiB/s [2024-12-09T17:07:36.178Z] 3103.00 IOPS, 12.12 MiB/s [2024-12-09T17:07:37.199Z] 3163.67 IOPS, 12.36 MiB/s [2024-12-09T17:07:38.156Z] 3175.25 IOPS, 12.40 MiB/s [2024-12-09T17:07:39.092Z] 3185.20 IOPS, 12.44 MiB/s [2024-12-09T17:07:40.029Z] 3195.83 IOPS, 12.48 MiB/s [2024-12-09T17:07:40.969Z] 3215.00 IOPS, 12.56 MiB/s [2024-12-09T17:07:41.904Z] 3205.12 IOPS, 12.52 MiB/s [2024-12-09T17:07:43.280Z] 3214.22 IOPS, 12.56 MiB/s [2024-12-09T17:07:43.281Z] 3222.20 IOPS, 12.59 MiB/s 00:18:20.240 Latency(us) 00:18:20.240 [2024-12-09T17:07:43.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.240 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:20.240 Verification LBA range: start 0x0 length 0x2000 00:18:20.240 TLSTESTn1 : 10.03 3224.98 12.60 0.00 0.00 39605.18 5825.42 52428.80 00:18:20.240 [2024-12-09T17:07:43.281Z] =================================================================================================================== 00:18:20.240 [2024-12-09T17:07:43.281Z] Total : 3224.98 12.60 0.00 0.00 39605.18 5825.42 52428.80 00:18:20.240 { 00:18:20.240 "results": [ 00:18:20.240 { 00:18:20.240 "job": "TLSTESTn1", 00:18:20.240 "core_mask": "0x4", 00:18:20.240 "workload": "verify", 00:18:20.240 "status": "finished", 00:18:20.240 "verify_range": { 00:18:20.240 "start": 0, 00:18:20.240 "length": 8192 00:18:20.240 }, 00:18:20.240 "queue_depth": 128, 00:18:20.240 "io_size": 4096, 00:18:20.240 "runtime": 10.030746, 00:18:20.240 "iops": 3224.984462770765, 00:18:20.240 "mibps": 12.5975955576983, 00:18:20.240 "io_failed": 0, 00:18:20.240 "io_timeout": 0, 00:18:20.240 "avg_latency_us": 39605.17867489178, 00:18:20.240 "min_latency_us": 5825.422222222222, 00:18:20.240 "max_latency_us": 52428.8 00:18:20.240 } 00:18:20.240 ], 00:18:20.240 "core_count": 1 00:18:20.240 } 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1490224 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1490224 ']' 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1490224 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1490224 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1490224' 00:18:20.240 killing process with pid 1490224 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1490224 00:18:20.240 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.240 00:18:20.240 Latency(us) 00:18:20.240 [2024-12-09T17:07:43.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.240 [2024-12-09T17:07:43.281Z] =================================================================================================================== 00:18:20.240 [2024-12-09T17:07:43.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.240 18:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1490224 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.z7LdbJtCei 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z7LdbJtCei 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z7LdbJtCei 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z7LdbJtCei 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z7LdbJtCei 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1491541 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1491541 /var/tmp/bdevperf.sock 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1491541 ']' 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.240 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.240 [2024-12-09 18:07:43.228223] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:20.240 [2024-12-09 18:07:43.228302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491541 ] 00:18:20.498 [2024-12-09 18:07:43.297370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.498 [2024-12-09 18:07:43.355433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.498 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.498 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.498 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:20.756 [2024-12-09 18:07:43.713082] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z7LdbJtCei': 0100666 00:18:20.756 [2024-12-09 18:07:43.713128] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:20.756 request: 00:18:20.756 { 00:18:20.756 "name": "key0", 00:18:20.756 "path": "/tmp/tmp.z7LdbJtCei", 00:18:20.756 "method": "keyring_file_add_key", 00:18:20.756 "req_id": 1 00:18:20.756 } 00:18:20.756 Got JSON-RPC error response 00:18:20.756 response: 00:18:20.756 { 00:18:20.756 "code": -1, 00:18:20.756 "message": "Operation not permitted" 00:18:20.756 } 00:18:20.756 18:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:21.014 [2024-12-09 18:07:43.993969] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.014 [2024-12-09 18:07:43.994042] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:21.014 request: 00:18:21.014 { 00:18:21.014 "name": "TLSTEST", 00:18:21.014 "trtype": "tcp", 00:18:21.014 "traddr": "10.0.0.2", 00:18:21.014 "adrfam": "ipv4", 00:18:21.014 "trsvcid": "4420", 00:18:21.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.014 "prchk_reftag": false, 00:18:21.014 "prchk_guard": false, 00:18:21.014 "hdgst": false, 00:18:21.014 "ddgst": false, 00:18:21.014 "psk": "key0", 00:18:21.014 "allow_unrecognized_csi": false, 00:18:21.014 "method": "bdev_nvme_attach_controller", 00:18:21.014 "req_id": 1 00:18:21.014 } 00:18:21.014 Got JSON-RPC error response 00:18:21.014 response: 00:18:21.014 { 00:18:21.014 "code": -126, 00:18:21.014 "message": "Required key not available" 00:18:21.014 } 00:18:21.014 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1491541 00:18:21.014 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1491541 ']' 00:18:21.014 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1491541 00:18:21.014 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.014 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.014 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491541 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1491541' 00:18:21.272 killing process with pid 1491541 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1491541 00:18:21.272 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.272 00:18:21.272 Latency(us) 00:18:21.272 [2024-12-09T17:07:44.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.272 [2024-12-09T17:07:44.313Z] =================================================================================================================== 00:18:21.272 [2024-12-09T17:07:44.313Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1491541 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1489928 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1489928 ']' 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1489928 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.272 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489928 00:18:21.529 
18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489928' 00:18:21.529 killing process with pid 1489928 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1489928 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1489928 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1491690 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1491690 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1491690 ']' 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:21.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.529 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.789 [2024-12-09 18:07:44.607257] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:21.789 [2024-12-09 18:07:44.607354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.789 [2024-12-09 18:07:44.681782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.789 [2024-12-09 18:07:44.737561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.789 [2024-12-09 18:07:44.737624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.789 [2024-12-09 18:07:44.737638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.789 [2024-12-09 18:07:44.737650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.789 [2024-12-09 18:07:44.737674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:21.789 [2024-12-09 18:07:44.738260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.z7LdbJtCei 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.z7LdbJtCei 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.z7LdbJtCei 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z7LdbJtCei 00:18:22.047 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.304 [2024-12-09 18:07:45.146819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.305 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.561 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.819 [2024-12-09 18:07:45.672182] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.819 [2024-12-09 18:07:45.672455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.819 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.077 malloc0 00:18:23.077 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.334 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:23.592 [2024-12-09 18:07:46.469652] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z7LdbJtCei': 0100666 00:18:23.592 [2024-12-09 18:07:46.469689] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:23.592 request: 00:18:23.592 { 00:18:23.592 "name": "key0", 00:18:23.592 "path": "/tmp/tmp.z7LdbJtCei", 00:18:23.592 "method": "keyring_file_add_key", 00:18:23.592 "req_id": 1 
00:18:23.592 } 00:18:23.592 Got JSON-RPC error response 00:18:23.592 response: 00:18:23.592 { 00:18:23.592 "code": -1, 00:18:23.592 "message": "Operation not permitted" 00:18:23.592 } 00:18:23.592 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.851 [2024-12-09 18:07:46.734424] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:23.851 [2024-12-09 18:07:46.734495] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:23.851 request: 00:18:23.851 { 00:18:23.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.851 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.851 "psk": "key0", 00:18:23.851 "method": "nvmf_subsystem_add_host", 00:18:23.851 "req_id": 1 00:18:23.851 } 00:18:23.851 Got JSON-RPC error response 00:18:23.851 response: 00:18:23.851 { 00:18:23.851 "code": -32603, 00:18:23.851 "message": "Internal error" 00:18:23.851 } 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1491690 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1491690 ']' 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1491690 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:23.851 18:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491690 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491690' 00:18:23.851 killing process with pid 1491690 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1491690 00:18:23.851 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1491690 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.z7LdbJtCei 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1491999 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1491999 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1491999 ']' 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.110 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.110 [2024-12-09 18:07:47.086574] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:24.110 [2024-12-09 18:07:47.086656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.368 [2024-12-09 18:07:47.158336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.368 [2024-12-09 18:07:47.210949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.368 [2024-12-09 18:07:47.211012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.368 [2024-12-09 18:07:47.211041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.368 [2024-12-09 18:07:47.211052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.368 [2024-12-09 18:07:47.211061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.368 [2024-12-09 18:07:47.211650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.z7LdbJtCei 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z7LdbJtCei 00:18:24.368 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:24.626 [2024-12-09 18:07:47.603456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.626 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:24.884 18:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.141 [2024-12-09 18:07:48.128934] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.142 [2024-12-09 18:07:48.129182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:25.142 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.400 malloc0 00:18:25.400 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.659 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:25.918 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1492281 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1492281 /var/tmp/bdevperf.sock 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1492281 ']' 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.485 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.486 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:26.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.486 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.486 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.486 [2024-12-09 18:07:49.267714] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:26.486 [2024-12-09 18:07:49.267794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492281 ] 00:18:26.486 [2024-12-09 18:07:49.333977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.486 [2024-12-09 18:07:49.389887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.486 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.486 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.486 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:26.744 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.002 [2024-12-09 18:07:50.026446] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.260 TLSTESTn1 00:18:27.260 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:27.519 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:27.519 "subsystems": [ 00:18:27.519 { 00:18:27.519 "subsystem": "keyring", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": "keyring_file_add_key", 00:18:27.519 "params": { 00:18:27.519 "name": "key0", 00:18:27.519 "path": "/tmp/tmp.z7LdbJtCei" 00:18:27.519 } 00:18:27.519 } 00:18:27.519 ] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "iobuf", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": "iobuf_set_options", 00:18:27.519 "params": { 00:18:27.519 "small_pool_count": 8192, 00:18:27.519 "large_pool_count": 1024, 00:18:27.519 "small_bufsize": 8192, 00:18:27.519 "large_bufsize": 135168, 00:18:27.519 "enable_numa": false 00:18:27.519 } 00:18:27.519 } 00:18:27.519 ] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "sock", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": "sock_set_default_impl", 00:18:27.519 "params": { 00:18:27.519 "impl_name": "posix" 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "sock_impl_set_options", 00:18:27.519 "params": { 00:18:27.519 "impl_name": "ssl", 00:18:27.519 "recv_buf_size": 4096, 00:18:27.519 "send_buf_size": 4096, 00:18:27.519 "enable_recv_pipe": true, 00:18:27.519 "enable_quickack": false, 00:18:27.519 "enable_placement_id": 0, 00:18:27.519 "enable_zerocopy_send_server": true, 00:18:27.519 "enable_zerocopy_send_client": false, 00:18:27.519 "zerocopy_threshold": 0, 00:18:27.519 "tls_version": 0, 00:18:27.519 "enable_ktls": false 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "sock_impl_set_options", 00:18:27.519 "params": { 00:18:27.519 "impl_name": "posix", 00:18:27.519 "recv_buf_size": 2097152, 00:18:27.519 "send_buf_size": 2097152, 00:18:27.519 "enable_recv_pipe": true, 00:18:27.519 "enable_quickack": false, 00:18:27.519 "enable_placement_id": 0, 
00:18:27.519 "enable_zerocopy_send_server": true, 00:18:27.519 "enable_zerocopy_send_client": false, 00:18:27.519 "zerocopy_threshold": 0, 00:18:27.519 "tls_version": 0, 00:18:27.519 "enable_ktls": false 00:18:27.519 } 00:18:27.519 } 00:18:27.519 ] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "vmd", 00:18:27.519 "config": [] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "accel", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": "accel_set_options", 00:18:27.519 "params": { 00:18:27.519 "small_cache_size": 128, 00:18:27.519 "large_cache_size": 16, 00:18:27.519 "task_count": 2048, 00:18:27.519 "sequence_count": 2048, 00:18:27.519 "buf_count": 2048 00:18:27.519 } 00:18:27.519 } 00:18:27.519 ] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "bdev", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": "bdev_set_options", 00:18:27.519 "params": { 00:18:27.519 "bdev_io_pool_size": 65535, 00:18:27.519 "bdev_io_cache_size": 256, 00:18:27.519 "bdev_auto_examine": true, 00:18:27.519 "iobuf_small_cache_size": 128, 00:18:27.519 "iobuf_large_cache_size": 16 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "bdev_raid_set_options", 00:18:27.519 "params": { 00:18:27.519 "process_window_size_kb": 1024, 00:18:27.519 "process_max_bandwidth_mb_sec": 0 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "bdev_iscsi_set_options", 00:18:27.519 "params": { 00:18:27.519 "timeout_sec": 30 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "bdev_nvme_set_options", 00:18:27.519 "params": { 00:18:27.519 "action_on_timeout": "none", 00:18:27.519 "timeout_us": 0, 00:18:27.519 "timeout_admin_us": 0, 00:18:27.519 "keep_alive_timeout_ms": 10000, 00:18:27.519 "arbitration_burst": 0, 00:18:27.519 "low_priority_weight": 0, 00:18:27.519 "medium_priority_weight": 0, 00:18:27.519 "high_priority_weight": 0, 00:18:27.519 "nvme_adminq_poll_period_us": 10000, 00:18:27.519 "nvme_ioq_poll_period_us": 0, 
00:18:27.519 "io_queue_requests": 0, 00:18:27.519 "delay_cmd_submit": true, 00:18:27.519 "transport_retry_count": 4, 00:18:27.519 "bdev_retry_count": 3, 00:18:27.519 "transport_ack_timeout": 0, 00:18:27.519 "ctrlr_loss_timeout_sec": 0, 00:18:27.519 "reconnect_delay_sec": 0, 00:18:27.519 "fast_io_fail_timeout_sec": 0, 00:18:27.519 "disable_auto_failback": false, 00:18:27.519 "generate_uuids": false, 00:18:27.519 "transport_tos": 0, 00:18:27.519 "nvme_error_stat": false, 00:18:27.519 "rdma_srq_size": 0, 00:18:27.519 "io_path_stat": false, 00:18:27.519 "allow_accel_sequence": false, 00:18:27.519 "rdma_max_cq_size": 0, 00:18:27.519 "rdma_cm_event_timeout_ms": 0, 00:18:27.519 "dhchap_digests": [ 00:18:27.519 "sha256", 00:18:27.519 "sha384", 00:18:27.519 "sha512" 00:18:27.519 ], 00:18:27.519 "dhchap_dhgroups": [ 00:18:27.519 "null", 00:18:27.519 "ffdhe2048", 00:18:27.519 "ffdhe3072", 00:18:27.519 "ffdhe4096", 00:18:27.519 "ffdhe6144", 00:18:27.519 "ffdhe8192" 00:18:27.519 ] 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "bdev_nvme_set_hotplug", 00:18:27.519 "params": { 00:18:27.519 "period_us": 100000, 00:18:27.519 "enable": false 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "bdev_malloc_create", 00:18:27.519 "params": { 00:18:27.519 "name": "malloc0", 00:18:27.519 "num_blocks": 8192, 00:18:27.519 "block_size": 4096, 00:18:27.519 "physical_block_size": 4096, 00:18:27.519 "uuid": "89cb075e-e809-4e15-8b36-7baed6e16f23", 00:18:27.519 "optimal_io_boundary": 0, 00:18:27.519 "md_size": 0, 00:18:27.519 "dif_type": 0, 00:18:27.519 "dif_is_head_of_md": false, 00:18:27.519 "dif_pi_format": 0 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "bdev_wait_for_examine" 00:18:27.519 } 00:18:27.519 ] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "nbd", 00:18:27.519 "config": [] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "scheduler", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": 
"framework_set_scheduler", 00:18:27.519 "params": { 00:18:27.519 "name": "static" 00:18:27.519 } 00:18:27.519 } 00:18:27.519 ] 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "subsystem": "nvmf", 00:18:27.519 "config": [ 00:18:27.519 { 00:18:27.519 "method": "nvmf_set_config", 00:18:27.519 "params": { 00:18:27.519 "discovery_filter": "match_any", 00:18:27.519 "admin_cmd_passthru": { 00:18:27.519 "identify_ctrlr": false 00:18:27.519 }, 00:18:27.519 "dhchap_digests": [ 00:18:27.519 "sha256", 00:18:27.519 "sha384", 00:18:27.519 "sha512" 00:18:27.519 ], 00:18:27.519 "dhchap_dhgroups": [ 00:18:27.519 "null", 00:18:27.519 "ffdhe2048", 00:18:27.519 "ffdhe3072", 00:18:27.519 "ffdhe4096", 00:18:27.519 "ffdhe6144", 00:18:27.519 "ffdhe8192" 00:18:27.519 ] 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "nvmf_set_max_subsystems", 00:18:27.519 "params": { 00:18:27.519 "max_subsystems": 1024 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "nvmf_set_crdt", 00:18:27.519 "params": { 00:18:27.519 "crdt1": 0, 00:18:27.519 "crdt2": 0, 00:18:27.519 "crdt3": 0 00:18:27.519 } 00:18:27.519 }, 00:18:27.519 { 00:18:27.519 "method": "nvmf_create_transport", 00:18:27.519 "params": { 00:18:27.519 "trtype": "TCP", 00:18:27.519 "max_queue_depth": 128, 00:18:27.519 "max_io_qpairs_per_ctrlr": 127, 00:18:27.519 "in_capsule_data_size": 4096, 00:18:27.519 "max_io_size": 131072, 00:18:27.519 "io_unit_size": 131072, 00:18:27.519 "max_aq_depth": 128, 00:18:27.519 "num_shared_buffers": 511, 00:18:27.519 "buf_cache_size": 4294967295, 00:18:27.520 "dif_insert_or_strip": false, 00:18:27.520 "zcopy": false, 00:18:27.520 "c2h_success": false, 00:18:27.520 "sock_priority": 0, 00:18:27.520 "abort_timeout_sec": 1, 00:18:27.520 "ack_timeout": 0, 00:18:27.520 "data_wr_pool_size": 0 00:18:27.520 } 00:18:27.520 }, 00:18:27.520 { 00:18:27.520 "method": "nvmf_create_subsystem", 00:18:27.520 "params": { 00:18:27.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.520 
"allow_any_host": false, 00:18:27.520 "serial_number": "SPDK00000000000001", 00:18:27.520 "model_number": "SPDK bdev Controller", 00:18:27.520 "max_namespaces": 10, 00:18:27.520 "min_cntlid": 1, 00:18:27.520 "max_cntlid": 65519, 00:18:27.520 "ana_reporting": false 00:18:27.520 } 00:18:27.520 }, 00:18:27.520 { 00:18:27.520 "method": "nvmf_subsystem_add_host", 00:18:27.520 "params": { 00:18:27.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.520 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.520 "psk": "key0" 00:18:27.520 } 00:18:27.520 }, 00:18:27.520 { 00:18:27.520 "method": "nvmf_subsystem_add_ns", 00:18:27.520 "params": { 00:18:27.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.520 "namespace": { 00:18:27.520 "nsid": 1, 00:18:27.520 "bdev_name": "malloc0", 00:18:27.520 "nguid": "89CB075EE8094E158B367BAED6E16F23", 00:18:27.520 "uuid": "89cb075e-e809-4e15-8b36-7baed6e16f23", 00:18:27.520 "no_auto_visible": false 00:18:27.520 } 00:18:27.520 } 00:18:27.520 }, 00:18:27.520 { 00:18:27.520 "method": "nvmf_subsystem_add_listener", 00:18:27.520 "params": { 00:18:27.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.520 "listen_address": { 00:18:27.520 "trtype": "TCP", 00:18:27.520 "adrfam": "IPv4", 00:18:27.520 "traddr": "10.0.0.2", 00:18:27.520 "trsvcid": "4420" 00:18:27.520 }, 00:18:27.520 "secure_channel": true 00:18:27.520 } 00:18:27.520 } 00:18:27.520 ] 00:18:27.520 } 00:18:27.520 ] 00:18:27.520 }' 00:18:27.520 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:27.778 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:27.778 "subsystems": [ 00:18:27.778 { 00:18:27.778 "subsystem": "keyring", 00:18:27.778 "config": [ 00:18:27.778 { 00:18:27.778 "method": "keyring_file_add_key", 00:18:27.778 "params": { 00:18:27.778 "name": "key0", 00:18:27.778 "path": "/tmp/tmp.z7LdbJtCei" 00:18:27.778 } 
00:18:27.778 } 00:18:27.778 ] 00:18:27.778 }, 00:18:27.778 { 00:18:27.778 "subsystem": "iobuf", 00:18:27.778 "config": [ 00:18:27.778 { 00:18:27.778 "method": "iobuf_set_options", 00:18:27.779 "params": { 00:18:27.779 "small_pool_count": 8192, 00:18:27.779 "large_pool_count": 1024, 00:18:27.779 "small_bufsize": 8192, 00:18:27.779 "large_bufsize": 135168, 00:18:27.779 "enable_numa": false 00:18:27.779 } 00:18:27.779 } 00:18:27.779 ] 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "subsystem": "sock", 00:18:27.779 "config": [ 00:18:27.779 { 00:18:27.779 "method": "sock_set_default_impl", 00:18:27.779 "params": { 00:18:27.779 "impl_name": "posix" 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "sock_impl_set_options", 00:18:27.779 "params": { 00:18:27.779 "impl_name": "ssl", 00:18:27.779 "recv_buf_size": 4096, 00:18:27.779 "send_buf_size": 4096, 00:18:27.779 "enable_recv_pipe": true, 00:18:27.779 "enable_quickack": false, 00:18:27.779 "enable_placement_id": 0, 00:18:27.779 "enable_zerocopy_send_server": true, 00:18:27.779 "enable_zerocopy_send_client": false, 00:18:27.779 "zerocopy_threshold": 0, 00:18:27.779 "tls_version": 0, 00:18:27.779 "enable_ktls": false 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "sock_impl_set_options", 00:18:27.779 "params": { 00:18:27.779 "impl_name": "posix", 00:18:27.779 "recv_buf_size": 2097152, 00:18:27.779 "send_buf_size": 2097152, 00:18:27.779 "enable_recv_pipe": true, 00:18:27.779 "enable_quickack": false, 00:18:27.779 "enable_placement_id": 0, 00:18:27.779 "enable_zerocopy_send_server": true, 00:18:27.779 "enable_zerocopy_send_client": false, 00:18:27.779 "zerocopy_threshold": 0, 00:18:27.779 "tls_version": 0, 00:18:27.779 "enable_ktls": false 00:18:27.779 } 00:18:27.779 } 00:18:27.779 ] 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "subsystem": "vmd", 00:18:27.779 "config": [] 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "subsystem": "accel", 00:18:27.779 "config": [ 00:18:27.779 { 00:18:27.779 
"method": "accel_set_options", 00:18:27.779 "params": { 00:18:27.779 "small_cache_size": 128, 00:18:27.779 "large_cache_size": 16, 00:18:27.779 "task_count": 2048, 00:18:27.779 "sequence_count": 2048, 00:18:27.779 "buf_count": 2048 00:18:27.779 } 00:18:27.779 } 00:18:27.779 ] 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "subsystem": "bdev", 00:18:27.779 "config": [ 00:18:27.779 { 00:18:27.779 "method": "bdev_set_options", 00:18:27.779 "params": { 00:18:27.779 "bdev_io_pool_size": 65535, 00:18:27.779 "bdev_io_cache_size": 256, 00:18:27.779 "bdev_auto_examine": true, 00:18:27.779 "iobuf_small_cache_size": 128, 00:18:27.779 "iobuf_large_cache_size": 16 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "bdev_raid_set_options", 00:18:27.779 "params": { 00:18:27.779 "process_window_size_kb": 1024, 00:18:27.779 "process_max_bandwidth_mb_sec": 0 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "bdev_iscsi_set_options", 00:18:27.779 "params": { 00:18:27.779 "timeout_sec": 30 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "bdev_nvme_set_options", 00:18:27.779 "params": { 00:18:27.779 "action_on_timeout": "none", 00:18:27.779 "timeout_us": 0, 00:18:27.779 "timeout_admin_us": 0, 00:18:27.779 "keep_alive_timeout_ms": 10000, 00:18:27.779 "arbitration_burst": 0, 00:18:27.779 "low_priority_weight": 0, 00:18:27.779 "medium_priority_weight": 0, 00:18:27.779 "high_priority_weight": 0, 00:18:27.779 "nvme_adminq_poll_period_us": 10000, 00:18:27.779 "nvme_ioq_poll_period_us": 0, 00:18:27.779 "io_queue_requests": 512, 00:18:27.779 "delay_cmd_submit": true, 00:18:27.779 "transport_retry_count": 4, 00:18:27.779 "bdev_retry_count": 3, 00:18:27.779 "transport_ack_timeout": 0, 00:18:27.779 "ctrlr_loss_timeout_sec": 0, 00:18:27.779 "reconnect_delay_sec": 0, 00:18:27.779 "fast_io_fail_timeout_sec": 0, 00:18:27.779 "disable_auto_failback": false, 00:18:27.779 "generate_uuids": false, 00:18:27.779 "transport_tos": 0, 00:18:27.779 
"nvme_error_stat": false, 00:18:27.779 "rdma_srq_size": 0, 00:18:27.779 "io_path_stat": false, 00:18:27.779 "allow_accel_sequence": false, 00:18:27.779 "rdma_max_cq_size": 0, 00:18:27.779 "rdma_cm_event_timeout_ms": 0, 00:18:27.779 "dhchap_digests": [ 00:18:27.779 "sha256", 00:18:27.779 "sha384", 00:18:27.779 "sha512" 00:18:27.779 ], 00:18:27.779 "dhchap_dhgroups": [ 00:18:27.779 "null", 00:18:27.779 "ffdhe2048", 00:18:27.779 "ffdhe3072", 00:18:27.779 "ffdhe4096", 00:18:27.779 "ffdhe6144", 00:18:27.779 "ffdhe8192" 00:18:27.779 ] 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "bdev_nvme_attach_controller", 00:18:27.779 "params": { 00:18:27.779 "name": "TLSTEST", 00:18:27.779 "trtype": "TCP", 00:18:27.779 "adrfam": "IPv4", 00:18:27.779 "traddr": "10.0.0.2", 00:18:27.779 "trsvcid": "4420", 00:18:27.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.779 "prchk_reftag": false, 00:18:27.779 "prchk_guard": false, 00:18:27.779 "ctrlr_loss_timeout_sec": 0, 00:18:27.779 "reconnect_delay_sec": 0, 00:18:27.779 "fast_io_fail_timeout_sec": 0, 00:18:27.779 "psk": "key0", 00:18:27.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.779 "hdgst": false, 00:18:27.779 "ddgst": false, 00:18:27.779 "multipath": "multipath" 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "bdev_nvme_set_hotplug", 00:18:27.779 "params": { 00:18:27.779 "period_us": 100000, 00:18:27.779 "enable": false 00:18:27.779 } 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "method": "bdev_wait_for_examine" 00:18:27.779 } 00:18:27.779 ] 00:18:27.779 }, 00:18:27.779 { 00:18:27.779 "subsystem": "nbd", 00:18:27.779 "config": [] 00:18:27.779 } 00:18:27.779 ] 00:18:27.779 }' 00:18:27.779 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1492281 00:18:27.779 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1492281 ']' 00:18:27.779 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1492281 00:18:27.779 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.779 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.779 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1492281 00:18:28.037 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.037 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.037 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1492281' 00:18:28.037 killing process with pid 1492281 00:18:28.037 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1492281 00:18:28.037 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.037 00:18:28.037 Latency(us) 00:18:28.037 [2024-12-09T17:07:51.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.037 [2024-12-09T17:07:51.078Z] =================================================================================================================== 00:18:28.037 [2024-12-09T17:07:51.078Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.037 18:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1492281 00:18:28.037 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1491999 00:18:28.037 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1491999 ']' 00:18:28.037 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1491999 00:18:28.037 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.038 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.038 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491999 00:18:28.296 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.296 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.296 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491999' 00:18:28.296 killing process with pid 1491999 00:18:28.296 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1491999 00:18:28.296 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1491999 00:18:28.554 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:28.554 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.554 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.554 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:28.554 "subsystems": [ 00:18:28.554 { 00:18:28.554 "subsystem": "keyring", 00:18:28.554 "config": [ 00:18:28.554 { 00:18:28.554 "method": "keyring_file_add_key", 00:18:28.554 "params": { 00:18:28.554 "name": "key0", 00:18:28.554 "path": "/tmp/tmp.z7LdbJtCei" 00:18:28.554 } 00:18:28.554 } 00:18:28.554 ] 00:18:28.554 }, 00:18:28.554 { 00:18:28.554 "subsystem": "iobuf", 00:18:28.554 "config": [ 00:18:28.554 { 00:18:28.554 "method": "iobuf_set_options", 00:18:28.554 "params": { 00:18:28.554 "small_pool_count": 8192, 00:18:28.554 "large_pool_count": 1024, 00:18:28.554 "small_bufsize": 8192, 00:18:28.554 "large_bufsize": 135168, 00:18:28.554 "enable_numa": false 00:18:28.554 } 00:18:28.554 } 00:18:28.554 ] 00:18:28.554 }, 
00:18:28.554 { 00:18:28.554 "subsystem": "sock", 00:18:28.555 "config": [ 00:18:28.555 { 00:18:28.555 "method": "sock_set_default_impl", 00:18:28.555 "params": { 00:18:28.555 "impl_name": "posix" 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "sock_impl_set_options", 00:18:28.555 "params": { 00:18:28.555 "impl_name": "ssl", 00:18:28.555 "recv_buf_size": 4096, 00:18:28.555 "send_buf_size": 4096, 00:18:28.555 "enable_recv_pipe": true, 00:18:28.555 "enable_quickack": false, 00:18:28.555 "enable_placement_id": 0, 00:18:28.555 "enable_zerocopy_send_server": true, 00:18:28.555 "enable_zerocopy_send_client": false, 00:18:28.555 "zerocopy_threshold": 0, 00:18:28.555 "tls_version": 0, 00:18:28.555 "enable_ktls": false 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "sock_impl_set_options", 00:18:28.555 "params": { 00:18:28.555 "impl_name": "posix", 00:18:28.555 "recv_buf_size": 2097152, 00:18:28.555 "send_buf_size": 2097152, 00:18:28.555 "enable_recv_pipe": true, 00:18:28.555 "enable_quickack": false, 00:18:28.555 "enable_placement_id": 0, 00:18:28.555 "enable_zerocopy_send_server": true, 00:18:28.555 "enable_zerocopy_send_client": false, 00:18:28.555 "zerocopy_threshold": 0, 00:18:28.555 "tls_version": 0, 00:18:28.555 "enable_ktls": false 00:18:28.555 } 00:18:28.555 } 00:18:28.555 ] 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "subsystem": "vmd", 00:18:28.555 "config": [] 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "subsystem": "accel", 00:18:28.555 "config": [ 00:18:28.555 { 00:18:28.555 "method": "accel_set_options", 00:18:28.555 "params": { 00:18:28.555 "small_cache_size": 128, 00:18:28.555 "large_cache_size": 16, 00:18:28.555 "task_count": 2048, 00:18:28.555 "sequence_count": 2048, 00:18:28.555 "buf_count": 2048 00:18:28.555 } 00:18:28.555 } 00:18:28.555 ] 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "subsystem": "bdev", 00:18:28.555 "config": [ 00:18:28.555 { 00:18:28.555 "method": "bdev_set_options", 00:18:28.555 "params": { 
00:18:28.555 "bdev_io_pool_size": 65535, 00:18:28.555 "bdev_io_cache_size": 256, 00:18:28.555 "bdev_auto_examine": true, 00:18:28.555 "iobuf_small_cache_size": 128, 00:18:28.555 "iobuf_large_cache_size": 16 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "bdev_raid_set_options", 00:18:28.555 "params": { 00:18:28.555 "process_window_size_kb": 1024, 00:18:28.555 "process_max_bandwidth_mb_sec": 0 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "bdev_iscsi_set_options", 00:18:28.555 "params": { 00:18:28.555 "timeout_sec": 30 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "bdev_nvme_set_options", 00:18:28.555 "params": { 00:18:28.555 "action_on_timeout": "none", 00:18:28.555 "timeout_us": 0, 00:18:28.555 "timeout_admin_us": 0, 00:18:28.555 "keep_alive_timeout_ms": 10000, 00:18:28.555 "arbitration_burst": 0, 00:18:28.555 "low_priority_weight": 0, 00:18:28.555 "medium_priority_weight": 0, 00:18:28.555 "high_priority_weight": 0, 00:18:28.555 "nvme_adminq_poll_period_us": 10000, 00:18:28.555 "nvme_ioq_poll_period_us": 0, 00:18:28.555 "io_queue_requests": 0, 00:18:28.555 "delay_cmd_submit": true, 00:18:28.555 "transport_retry_count": 4, 00:18:28.555 "bdev_retry_count": 3, 00:18:28.555 "transport_ack_timeout": 0, 00:18:28.555 "ctrlr_loss_timeout_sec": 0, 00:18:28.555 "reconnect_delay_sec": 0, 00:18:28.555 "fast_io_fail_timeout_sec": 0, 00:18:28.555 "disable_auto_failback": false, 00:18:28.555 "generate_uuids": false, 00:18:28.555 "transport_tos": 0, 00:18:28.555 "nvme_error_stat": false, 00:18:28.555 "rdma_srq_size": 0, 00:18:28.555 "io_path_stat": false, 00:18:28.555 "allow_accel_sequence": false, 00:18:28.555 "rdma_max_cq_size": 0, 00:18:28.555 "rdma_cm_event_timeout_ms": 0, 00:18:28.555 "dhchap_digests": [ 00:18:28.555 "sha256", 00:18:28.555 "sha384", 00:18:28.555 "sha512" 00:18:28.555 ], 00:18:28.555 "dhchap_dhgroups": [ 00:18:28.555 "null", 00:18:28.555 "ffdhe2048", 00:18:28.555 "ffdhe3072", 00:18:28.555 
"ffdhe4096", 00:18:28.555 "ffdhe6144", 00:18:28.555 "ffdhe8192" 00:18:28.555 ] 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "bdev_nvme_set_hotplug", 00:18:28.555 "params": { 00:18:28.555 "period_us": 100000, 00:18:28.555 "enable": false 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "bdev_malloc_create", 00:18:28.555 "params": { 00:18:28.555 "name": "malloc0", 00:18:28.555 "num_blocks": 8192, 00:18:28.555 "block_size": 4096, 00:18:28.555 "physical_block_size": 4096, 00:18:28.555 "uuid": "89cb075e-e809-4e15-8b36-7baed6e16f23", 00:18:28.555 "optimal_io_boundary": 0, 00:18:28.555 "md_size": 0, 00:18:28.555 "dif_type": 0, 00:18:28.555 "dif_is_head_of_md": false, 00:18:28.555 "dif_pi_format": 0 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "bdev_wait_for_examine" 00:18:28.555 } 00:18:28.555 ] 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "subsystem": "nbd", 00:18:28.555 "config": [] 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "subsystem": "scheduler", 00:18:28.555 "config": [ 00:18:28.555 { 00:18:28.555 "method": "framework_set_scheduler", 00:18:28.555 "params": { 00:18:28.555 "name": "static" 00:18:28.555 } 00:18:28.555 } 00:18:28.555 ] 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "subsystem": "nvmf", 00:18:28.555 "config": [ 00:18:28.555 { 00:18:28.555 "method": "nvmf_set_config", 00:18:28.555 "params": { 00:18:28.555 "discovery_filter": "match_any", 00:18:28.555 "admin_cmd_passthru": { 00:18:28.555 "identify_ctrlr": false 00:18:28.555 }, 00:18:28.555 "dhchap_digests": [ 00:18:28.555 "sha256", 00:18:28.555 "sha384", 00:18:28.555 "sha512" 00:18:28.555 ], 00:18:28.555 "dhchap_dhgroups": [ 00:18:28.555 "null", 00:18:28.555 "ffdhe2048", 00:18:28.555 "ffdhe3072", 00:18:28.555 "ffdhe4096", 00:18:28.555 "ffdhe6144", 00:18:28.555 "ffdhe8192" 00:18:28.555 ] 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "nvmf_set_max_subsystems", 00:18:28.555 "params": { 00:18:28.555 "max_subsystems": 1024 
00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "nvmf_set_crdt", 00:18:28.555 "params": { 00:18:28.555 "crdt1": 0, 00:18:28.555 "crdt2": 0, 00:18:28.555 "crdt3": 0 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "nvmf_create_transport", 00:18:28.555 "params": { 00:18:28.555 "trtype": "TCP", 00:18:28.555 "max_queue_depth": 128, 00:18:28.555 "max_io_qpairs_per_ctrlr": 127, 00:18:28.555 "in_capsule_data_size": 4096, 00:18:28.555 "max_io_size": 131072, 00:18:28.555 "io_unit_size": 131072, 00:18:28.555 "max_aq_depth": 128, 00:18:28.555 "num_shared_buffers": 511, 00:18:28.555 "buf_cache_size": 4294967295, 00:18:28.555 "dif_insert_or_strip": false, 00:18:28.555 "zcopy": false, 00:18:28.555 "c2h_success": false, 00:18:28.555 "sock_priority": 0, 00:18:28.555 "abort_timeout_sec": 1, 00:18:28.555 "ack_timeout": 0, 00:18:28.555 "data_wr_pool_size": 0 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "nvmf_create_subsystem", 00:18:28.555 "params": { 00:18:28.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.555 "allow_any_host": false, 00:18:28.555 "serial_number": "SPDK00000000000001", 00:18:28.555 "model_number": "SPDK bdev Controller", 00:18:28.555 "max_namespaces": 10, 00:18:28.555 "min_cntlid": 1, 00:18:28.555 "max_cntlid": 65519, 00:18:28.555 "ana_reporting": false 00:18:28.555 } 00:18:28.555 }, 00:18:28.555 { 00:18:28.555 "method": "nvmf_subsystem_add_host", 00:18:28.555 "params": { 00:18:28.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.555 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.555 "psk": "key0" 00:18:28.556 } 00:18:28.556 }, 00:18:28.556 { 00:18:28.556 "method": "nvmf_subsystem_add_ns", 00:18:28.556 "params": { 00:18:28.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.556 "namespace": { 00:18:28.556 "nsid": 1, 00:18:28.556 "bdev_name": "malloc0", 00:18:28.556 "nguid": "89CB075EE8094E158B367BAED6E16F23", 00:18:28.556 "uuid": "89cb075e-e809-4e15-8b36-7baed6e16f23", 00:18:28.556 "no_auto_visible": 
false 00:18:28.556 } 00:18:28.556 } 00:18:28.556 }, 00:18:28.556 { 00:18:28.556 "method": "nvmf_subsystem_add_listener", 00:18:28.556 "params": { 00:18:28.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.556 "listen_address": { 00:18:28.556 "trtype": "TCP", 00:18:28.556 "adrfam": "IPv4", 00:18:28.556 "traddr": "10.0.0.2", 00:18:28.556 "trsvcid": "4420" 00:18:28.556 }, 00:18:28.556 "secure_channel": true 00:18:28.556 } 00:18:28.556 } 00:18:28.556 ] 00:18:28.556 } 00:18:28.556 ] 00:18:28.556 }' 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1492557 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1492557 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1492557 ']' 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.556 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.556 [2024-12-09 18:07:51.399231] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:28.556 [2024-12-09 18:07:51.399325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.556 [2024-12-09 18:07:51.473837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.556 [2024-12-09 18:07:51.530008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.556 [2024-12-09 18:07:51.530063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.556 [2024-12-09 18:07:51.530091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.556 [2024-12-09 18:07:51.530102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.556 [2024-12-09 18:07:51.530111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.556 [2024-12-09 18:07:51.530728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.814 [2024-12-09 18:07:51.765286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.814 [2024-12-09 18:07:51.797290] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.814 [2024-12-09 18:07:51.797572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1492709 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1492709 /var/tmp/bdevperf.sock 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1492709 ']' 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
-c /dev/fd/63 00:18:29.382 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.383 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.383 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.383 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:29.383 "subsystems": [ 00:18:29.383 { 00:18:29.383 "subsystem": "keyring", 00:18:29.383 "config": [ 00:18:29.383 { 00:18:29.383 "method": "keyring_file_add_key", 00:18:29.383 "params": { 00:18:29.383 "name": "key0", 00:18:29.383 "path": "/tmp/tmp.z7LdbJtCei" 00:18:29.383 } 00:18:29.383 } 00:18:29.383 ] 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "subsystem": "iobuf", 00:18:29.383 "config": [ 00:18:29.383 { 00:18:29.383 "method": "iobuf_set_options", 00:18:29.383 "params": { 00:18:29.383 "small_pool_count": 8192, 00:18:29.383 "large_pool_count": 1024, 00:18:29.383 "small_bufsize": 8192, 00:18:29.383 "large_bufsize": 135168, 00:18:29.383 "enable_numa": false 00:18:29.383 } 00:18:29.383 } 00:18:29.383 ] 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "subsystem": "sock", 00:18:29.383 "config": [ 00:18:29.383 { 00:18:29.383 "method": "sock_set_default_impl", 00:18:29.383 "params": { 00:18:29.383 "impl_name": "posix" 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "sock_impl_set_options", 00:18:29.383 "params": { 00:18:29.383 "impl_name": "ssl", 00:18:29.383 "recv_buf_size": 4096, 00:18:29.383 "send_buf_size": 4096, 00:18:29.383 "enable_recv_pipe": true, 00:18:29.383 "enable_quickack": false, 00:18:29.383 "enable_placement_id": 0, 00:18:29.383 "enable_zerocopy_send_server": true, 00:18:29.383 "enable_zerocopy_send_client": false, 00:18:29.383 
"zerocopy_threshold": 0, 00:18:29.383 "tls_version": 0, 00:18:29.383 "enable_ktls": false 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "sock_impl_set_options", 00:18:29.383 "params": { 00:18:29.383 "impl_name": "posix", 00:18:29.383 "recv_buf_size": 2097152, 00:18:29.383 "send_buf_size": 2097152, 00:18:29.383 "enable_recv_pipe": true, 00:18:29.383 "enable_quickack": false, 00:18:29.383 "enable_placement_id": 0, 00:18:29.383 "enable_zerocopy_send_server": true, 00:18:29.383 "enable_zerocopy_send_client": false, 00:18:29.383 "zerocopy_threshold": 0, 00:18:29.383 "tls_version": 0, 00:18:29.383 "enable_ktls": false 00:18:29.383 } 00:18:29.383 } 00:18:29.383 ] 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "subsystem": "vmd", 00:18:29.383 "config": [] 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "subsystem": "accel", 00:18:29.383 "config": [ 00:18:29.383 { 00:18:29.383 "method": "accel_set_options", 00:18:29.383 "params": { 00:18:29.383 "small_cache_size": 128, 00:18:29.383 "large_cache_size": 16, 00:18:29.383 "task_count": 2048, 00:18:29.383 "sequence_count": 2048, 00:18:29.383 "buf_count": 2048 00:18:29.383 } 00:18:29.383 } 00:18:29.383 ] 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "subsystem": "bdev", 00:18:29.383 "config": [ 00:18:29.383 { 00:18:29.383 "method": "bdev_set_options", 00:18:29.383 "params": { 00:18:29.383 "bdev_io_pool_size": 65535, 00:18:29.383 "bdev_io_cache_size": 256, 00:18:29.383 "bdev_auto_examine": true, 00:18:29.383 "iobuf_small_cache_size": 128, 00:18:29.383 "iobuf_large_cache_size": 16 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "bdev_raid_set_options", 00:18:29.383 "params": { 00:18:29.383 "process_window_size_kb": 1024, 00:18:29.383 "process_max_bandwidth_mb_sec": 0 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "bdev_iscsi_set_options", 00:18:29.383 "params": { 00:18:29.383 "timeout_sec": 30 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": 
"bdev_nvme_set_options", 00:18:29.383 "params": { 00:18:29.383 "action_on_timeout": "none", 00:18:29.383 "timeout_us": 0, 00:18:29.383 "timeout_admin_us": 0, 00:18:29.383 "keep_alive_timeout_ms": 10000, 00:18:29.383 "arbitration_burst": 0, 00:18:29.383 "low_priority_weight": 0, 00:18:29.383 "medium_priority_weight": 0, 00:18:29.383 "high_priority_weight": 0, 00:18:29.383 "nvme_adminq_poll_period_us": 10000, 00:18:29.383 "nvme_ioq_poll_period_us": 0, 00:18:29.383 "io_queue_requests": 512, 00:18:29.383 "delay_cmd_submit": true, 00:18:29.383 "transport_retry_count": 4, 00:18:29.383 "bdev_retry_count": 3, 00:18:29.383 "transport_ack_timeout": 0, 00:18:29.383 "ctrlr_loss_timeout_sec": 0, 00:18:29.383 "reconnect_delay_sec": 0, 00:18:29.383 "fast_io_fail_timeout_sec": 0, 00:18:29.383 "disable_auto_failback": false, 00:18:29.383 "generate_uuids": false, 00:18:29.383 "transport_tos": 0, 00:18:29.383 "nvme_error_stat": false, 00:18:29.383 "rdma_srq_size": 0, 00:18:29.383 "io_path_stat": false, 00:18:29.383 "allow_accel_sequence": false, 00:18:29.383 "rdma_max_cq_size": 0, 00:18:29.383 "rdma_cm_event_timeout_ms": 0, 00:18:29.383 "dhchap_digests": [ 00:18:29.383 "sha256", 00:18:29.383 "sha384", 00:18:29.383 "sha512" 00:18:29.383 ], 00:18:29.383 "dhchap_dhgroups": [ 00:18:29.383 "null", 00:18:29.383 "ffdhe2048", 00:18:29.383 "ffdhe3072", 00:18:29.383 "ffdhe4096", 00:18:29.383 "ffdhe6144", 00:18:29.383 "ffdhe8192" 00:18:29.383 ] 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "bdev_nvme_attach_controller", 00:18:29.383 "params": { 00:18:29.383 "name": "TLSTEST", 00:18:29.383 "trtype": "TCP", 00:18:29.383 "adrfam": "IPv4", 00:18:29.383 "traddr": "10.0.0.2", 00:18:29.383 "trsvcid": "4420", 00:18:29.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.383 "prchk_reftag": false, 00:18:29.383 "prchk_guard": false, 00:18:29.383 "ctrlr_loss_timeout_sec": 0, 00:18:29.383 "reconnect_delay_sec": 0, 00:18:29.383 "fast_io_fail_timeout_sec": 0, 00:18:29.383 "psk": 
"key0", 00:18:29.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.383 "hdgst": false, 00:18:29.383 "ddgst": false, 00:18:29.383 "multipath": "multipath" 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "bdev_nvme_set_hotplug", 00:18:29.383 "params": { 00:18:29.383 "period_us": 100000, 00:18:29.383 "enable": false 00:18:29.383 } 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "method": "bdev_wait_for_examine" 00:18:29.383 } 00:18:29.383 ] 00:18:29.383 }, 00:18:29.383 { 00:18:29.383 "subsystem": "nbd", 00:18:29.383 "config": [] 00:18:29.383 } 00:18:29.383 ] 00:18:29.383 }' 00:18:29.642 [2024-12-09 18:07:52.454653] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:29.642 [2024-12-09 18:07:52.454737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492709 ] 00:18:29.642 [2024-12-09 18:07:52.520225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.642 [2024-12-09 18:07:52.577009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.900 [2024-12-09 18:07:52.755982] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.900 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.900 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.900 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.160 Running I/O for 10 seconds... 
00:18:32.033 3528.00 IOPS, 13.78 MiB/s [2024-12-09T17:07:56.010Z] 3485.50 IOPS, 13.62 MiB/s [2024-12-09T17:07:57.385Z] 3533.00 IOPS, 13.80 MiB/s [2024-12-09T17:07:58.324Z] 3542.25 IOPS, 13.84 MiB/s [2024-12-09T17:07:59.263Z] 3543.80 IOPS, 13.84 MiB/s [2024-12-09T17:08:00.200Z] 3556.00 IOPS, 13.89 MiB/s [2024-12-09T17:08:01.140Z] 3555.14 IOPS, 13.89 MiB/s [2024-12-09T17:08:02.076Z] 3565.88 IOPS, 13.93 MiB/s [2024-12-09T17:08:03.049Z] 3569.56 IOPS, 13.94 MiB/s [2024-12-09T17:08:03.049Z] 3573.90 IOPS, 13.96 MiB/s 00:18:40.008 Latency(us) 00:18:40.008 [2024-12-09T17:08:03.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.008 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.008 Verification LBA range: start 0x0 length 0x2000 00:18:40.008 TLSTESTn1 : 10.02 3579.08 13.98 0.00 0.00 35702.32 8495.41 43108.12 00:18:40.008 [2024-12-09T17:08:03.049Z] =================================================================================================================== 00:18:40.008 [2024-12-09T17:08:03.049Z] Total : 3579.08 13.98 0.00 0.00 35702.32 8495.41 43108.12 00:18:40.008 { 00:18:40.008 "results": [ 00:18:40.008 { 00:18:40.008 "job": "TLSTESTn1", 00:18:40.008 "core_mask": "0x4", 00:18:40.008 "workload": "verify", 00:18:40.008 "status": "finished", 00:18:40.008 "verify_range": { 00:18:40.008 "start": 0, 00:18:40.008 "length": 8192 00:18:40.008 }, 00:18:40.008 "queue_depth": 128, 00:18:40.008 "io_size": 4096, 00:18:40.008 "runtime": 10.021004, 00:18:40.008 "iops": 3579.082495127235, 00:18:40.008 "mibps": 13.98079099659076, 00:18:40.008 "io_failed": 0, 00:18:40.008 "io_timeout": 0, 00:18:40.008 "avg_latency_us": 35702.317278264156, 00:18:40.008 "min_latency_us": 8495.407407407407, 00:18:40.008 "max_latency_us": 43108.124444444446 00:18:40.008 } 00:18:40.008 ], 00:18:40.008 "core_count": 1 00:18:40.008 } 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
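The bdevperf run above finishes by printing a per-job results JSON (job `TLSTESTn1`, `iops`, `mibps`, `runtime`, and so on). The throughput field is just `iops * io_size` converted to MiB/s, which is why 3579.08 IOPS at 4096-byte I/O shows as 13.98 MiB/s. A minimal sketch of parsing that JSON and recomputing the MiB/s figure, assuming only the result shape visible in the log (the `summarize` helper is illustrative, not part of SPDK's tooling):

```python
import json

# Results JSON shape copied from the bdevperf output above (values trimmed
# to the fields used here).
results_json = """
{
  "results": [
    {
      "job": "TLSTESTn1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 10.021004,
      "iops": 3579.082495127235,
      "mibps": 13.98079099659076,
      "io_failed": 0,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
"""

def summarize(raw: str) -> dict:
    """Return job name, IOPS, and MiB/s recomputed as iops * io_size / 2^20."""
    job = json.loads(raw)["results"][0]
    mibps = job["iops"] * job["io_size"] / (1024 * 1024)
    return {"job": job["job"], "iops": job["iops"], "mibps": mibps}

print(summarize(results_json))
```

Recomputing `mibps` this way reproduces the logged value, which is a quick sanity check when scraping these result blocks out of autotest logs.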
exit 1' SIGINT SIGTERM EXIT 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1492709 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1492709 ']' 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1492709 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1492709 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1492709' 00:18:40.268 killing process with pid 1492709 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1492709 00:18:40.268 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.268 00:18:40.268 Latency(us) 00:18:40.268 [2024-12-09T17:08:03.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.268 [2024-12-09T17:08:03.309Z] =================================================================================================================== 00:18:40.268 [2024-12-09T17:08:03.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1492709 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1492557 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1492557 ']' 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1492557 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.268 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1492557 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1492557' 00:18:40.526 killing process with pid 1492557 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1492557 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1492557 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1494035 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1494035 00:18:40.526 
18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1494035 ']' 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.526 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.786 [2024-12-09 18:08:03.591904] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:40.786 [2024-12-09 18:08:03.591981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.786 [2024-12-09 18:08:03.662433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.786 [2024-12-09 18:08:03.717794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.786 [2024-12-09 18:08:03.717849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.786 [2024-12-09 18:08:03.717863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.786 [2024-12-09 18:08:03.717874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:40.786 [2024-12-09 18:08:03.717884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.786 [2024-12-09 18:08:03.718462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.z7LdbJtCei 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z7LdbJtCei 00:18:41.044 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:41.303 [2024-12-09 18:08:04.102003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.303 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:41.560 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:41.818 [2024-12-09 18:08:04.643448] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:41.818 [2024-12-09 18:08:04.643715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.818 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.076 malloc0 00:18:42.076 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:42.335 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:42.592 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1494322 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1494322 /var/tmp/bdevperf.sock 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1494322 ']' 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.850 
18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.850 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 [2024-12-09 18:08:05.778168] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:42.850 [2024-12-09 18:08:05.778245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494322 ] 00:18:42.850 [2024-12-09 18:08:05.843191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.108 [2024-12-09 18:08:05.899145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.108 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.108 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.108 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:43.366 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:43.624 [2024-12-09 18:08:06.530580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:43.624 nvme0n1 00:18:43.624 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:43.882 Running I/O for 1 seconds... 00:18:44.820 3493.00 IOPS, 13.64 MiB/s 00:18:44.820 Latency(us) 00:18:44.820 [2024-12-09T17:08:07.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.820 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:44.820 Verification LBA range: start 0x0 length 0x2000 00:18:44.820 nvme0n1 : 1.03 3505.29 13.69 0.00 0.00 36006.25 5995.33 40972.14 00:18:44.820 [2024-12-09T17:08:07.861Z] =================================================================================================================== 00:18:44.820 [2024-12-09T17:08:07.861Z] Total : 3505.29 13.69 0.00 0.00 36006.25 5995.33 40972.14 00:18:44.820 { 00:18:44.820 "results": [ 00:18:44.820 { 00:18:44.820 "job": "nvme0n1", 00:18:44.820 "core_mask": "0x2", 00:18:44.820 "workload": "verify", 00:18:44.820 "status": "finished", 00:18:44.820 "verify_range": { 00:18:44.820 "start": 0, 00:18:44.820 "length": 8192 00:18:44.820 }, 00:18:44.820 "queue_depth": 128, 00:18:44.820 "io_size": 4096, 00:18:44.820 "runtime": 1.03301, 00:18:44.820 "iops": 3505.290365049709, 00:18:44.820 "mibps": 13.692540488475426, 00:18:44.820 "io_failed": 0, 00:18:44.820 "io_timeout": 0, 00:18:44.820 "avg_latency_us": 36006.252763406876, 00:18:44.820 "min_latency_us": 5995.3303703703705, 00:18:44.820 "max_latency_us": 40972.136296296296 00:18:44.820 } 00:18:44.820 ], 00:18:44.820 "core_count": 1 00:18:44.820 } 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1494322 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1494322 ']' 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1494322 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494322 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494322' 00:18:44.820 killing process with pid 1494322 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1494322 00:18:44.820 Received shutdown signal, test time was about 1.000000 seconds 00:18:44.820 00:18:44.820 Latency(us) 00:18:44.820 [2024-12-09T17:08:07.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.820 [2024-12-09T17:08:07.861Z] =================================================================================================================== 00:18:44.820 [2024-12-09T17:08:07.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.820 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1494322 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1494035 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1494035 ']' 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1494035 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494035 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494035' 00:18:45.080 killing process with pid 1494035 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1494035 00:18:45.080 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1494035 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1494608 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1494608 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1494608 ']' 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.339 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.339 [2024-12-09 18:08:08.378156] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:45.339 [2024-12-09 18:08:08.378245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.597 [2024-12-09 18:08:08.452476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.597 [2024-12-09 18:08:08.507003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.597 [2024-12-09 18:08:08.507058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.597 [2024-12-09 18:08:08.507097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.597 [2024-12-09 18:08:08.507109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.597 [2024-12-09 18:08:08.507118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.597 [2024-12-09 18:08:08.507727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.597 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.597 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.597 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.597 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.597 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.855 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.855 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:45.855 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.855 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.855 [2024-12-09 18:08:08.651976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.855 malloc0 00:18:45.855 [2024-12-09 18:08:08.682628] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.855 [2024-12-09 18:08:08.682919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.855 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1494633 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1494633 /var/tmp/bdevperf.sock 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1494633 ']' 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.856 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.856 [2024-12-09 18:08:08.754748] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:45.856 [2024-12-09 18:08:08.754814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494633 ] 00:18:45.856 [2024-12-09 18:08:08.824287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.856 [2024-12-09 18:08:08.881666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.113 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.113 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.113 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z7LdbJtCei 00:18:46.371 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:46.629 [2024-12-09 18:08:09.505754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.629 nvme0n1 00:18:46.629 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.887 Running I/O for 1 seconds... 
00:18:47.827 3339.00 IOPS, 13.04 MiB/s 00:18:47.827 Latency(us) 00:18:47.827 [2024-12-09T17:08:10.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.827 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:47.827 Verification LBA range: start 0x0 length 0x2000 00:18:47.827 nvme0n1 : 1.02 3391.41 13.25 0.00 0.00 37417.78 8204.14 34758.35 00:18:47.827 [2024-12-09T17:08:10.868Z] =================================================================================================================== 00:18:47.827 [2024-12-09T17:08:10.868Z] Total : 3391.41 13.25 0.00 0.00 37417.78 8204.14 34758.35 00:18:47.827 { 00:18:47.827 "results": [ 00:18:47.827 { 00:18:47.827 "job": "nvme0n1", 00:18:47.827 "core_mask": "0x2", 00:18:47.827 "workload": "verify", 00:18:47.827 "status": "finished", 00:18:47.827 "verify_range": { 00:18:47.827 "start": 0, 00:18:47.827 "length": 8192 00:18:47.827 }, 00:18:47.827 "queue_depth": 128, 00:18:47.827 "io_size": 4096, 00:18:47.827 "runtime": 1.02229, 00:18:47.827 "iops": 3391.405569848086, 00:18:47.827 "mibps": 13.247678007219086, 00:18:47.827 "io_failed": 0, 00:18:47.827 "io_timeout": 0, 00:18:47.827 "avg_latency_us": 37417.78021515025, 00:18:47.827 "min_latency_us": 8204.136296296296, 00:18:47.827 "max_latency_us": 34758.35259259259 00:18:47.827 } 00:18:47.827 ], 00:18:47.827 "core_count": 1 00:18:47.827 } 00:18:47.827 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:47.827 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.827 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.827 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.827 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:47.827 "subsystems": [ 00:18:47.827 { 00:18:47.827 "subsystem": 
"keyring", 00:18:47.827 "config": [ 00:18:47.827 { 00:18:47.827 "method": "keyring_file_add_key", 00:18:47.827 "params": { 00:18:47.827 "name": "key0", 00:18:47.827 "path": "/tmp/tmp.z7LdbJtCei" 00:18:47.827 } 00:18:47.827 } 00:18:47.827 ] 00:18:47.827 }, 00:18:47.827 { 00:18:47.827 "subsystem": "iobuf", 00:18:47.827 "config": [ 00:18:47.827 { 00:18:47.827 "method": "iobuf_set_options", 00:18:47.827 "params": { 00:18:47.827 "small_pool_count": 8192, 00:18:47.827 "large_pool_count": 1024, 00:18:47.827 "small_bufsize": 8192, 00:18:47.827 "large_bufsize": 135168, 00:18:47.827 "enable_numa": false 00:18:47.827 } 00:18:47.827 } 00:18:47.827 ] 00:18:47.827 }, 00:18:47.827 { 00:18:47.827 "subsystem": "sock", 00:18:47.827 "config": [ 00:18:47.827 { 00:18:47.827 "method": "sock_set_default_impl", 00:18:47.827 "params": { 00:18:47.827 "impl_name": "posix" 00:18:47.827 } 00:18:47.827 }, 00:18:47.827 { 00:18:47.827 "method": "sock_impl_set_options", 00:18:47.827 "params": { 00:18:47.827 "impl_name": "ssl", 00:18:47.827 "recv_buf_size": 4096, 00:18:47.827 "send_buf_size": 4096, 00:18:47.827 "enable_recv_pipe": true, 00:18:47.827 "enable_quickack": false, 00:18:47.827 "enable_placement_id": 0, 00:18:47.827 "enable_zerocopy_send_server": true, 00:18:47.827 "enable_zerocopy_send_client": false, 00:18:47.827 "zerocopy_threshold": 0, 00:18:47.827 "tls_version": 0, 00:18:47.828 "enable_ktls": false 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "sock_impl_set_options", 00:18:47.828 "params": { 00:18:47.828 "impl_name": "posix", 00:18:47.828 "recv_buf_size": 2097152, 00:18:47.828 "send_buf_size": 2097152, 00:18:47.828 "enable_recv_pipe": true, 00:18:47.828 "enable_quickack": false, 00:18:47.828 "enable_placement_id": 0, 00:18:47.828 "enable_zerocopy_send_server": true, 00:18:47.828 "enable_zerocopy_send_client": false, 00:18:47.828 "zerocopy_threshold": 0, 00:18:47.828 "tls_version": 0, 00:18:47.828 "enable_ktls": false 00:18:47.828 } 00:18:47.828 } 00:18:47.828 
] 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "subsystem": "vmd", 00:18:47.828 "config": [] 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "subsystem": "accel", 00:18:47.828 "config": [ 00:18:47.828 { 00:18:47.828 "method": "accel_set_options", 00:18:47.828 "params": { 00:18:47.828 "small_cache_size": 128, 00:18:47.828 "large_cache_size": 16, 00:18:47.828 "task_count": 2048, 00:18:47.828 "sequence_count": 2048, 00:18:47.828 "buf_count": 2048 00:18:47.828 } 00:18:47.828 } 00:18:47.828 ] 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "subsystem": "bdev", 00:18:47.828 "config": [ 00:18:47.828 { 00:18:47.828 "method": "bdev_set_options", 00:18:47.828 "params": { 00:18:47.828 "bdev_io_pool_size": 65535, 00:18:47.828 "bdev_io_cache_size": 256, 00:18:47.828 "bdev_auto_examine": true, 00:18:47.828 "iobuf_small_cache_size": 128, 00:18:47.828 "iobuf_large_cache_size": 16 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "bdev_raid_set_options", 00:18:47.828 "params": { 00:18:47.828 "process_window_size_kb": 1024, 00:18:47.828 "process_max_bandwidth_mb_sec": 0 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "bdev_iscsi_set_options", 00:18:47.828 "params": { 00:18:47.828 "timeout_sec": 30 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "bdev_nvme_set_options", 00:18:47.828 "params": { 00:18:47.828 "action_on_timeout": "none", 00:18:47.828 "timeout_us": 0, 00:18:47.828 "timeout_admin_us": 0, 00:18:47.828 "keep_alive_timeout_ms": 10000, 00:18:47.828 "arbitration_burst": 0, 00:18:47.828 "low_priority_weight": 0, 00:18:47.828 "medium_priority_weight": 0, 00:18:47.828 "high_priority_weight": 0, 00:18:47.828 "nvme_adminq_poll_period_us": 10000, 00:18:47.828 "nvme_ioq_poll_period_us": 0, 00:18:47.828 "io_queue_requests": 0, 00:18:47.828 "delay_cmd_submit": true, 00:18:47.828 "transport_retry_count": 4, 00:18:47.828 "bdev_retry_count": 3, 00:18:47.828 "transport_ack_timeout": 0, 00:18:47.828 "ctrlr_loss_timeout_sec": 0, 
00:18:47.828 "reconnect_delay_sec": 0, 00:18:47.828 "fast_io_fail_timeout_sec": 0, 00:18:47.828 "disable_auto_failback": false, 00:18:47.828 "generate_uuids": false, 00:18:47.828 "transport_tos": 0, 00:18:47.828 "nvme_error_stat": false, 00:18:47.828 "rdma_srq_size": 0, 00:18:47.828 "io_path_stat": false, 00:18:47.828 "allow_accel_sequence": false, 00:18:47.828 "rdma_max_cq_size": 0, 00:18:47.828 "rdma_cm_event_timeout_ms": 0, 00:18:47.828 "dhchap_digests": [ 00:18:47.828 "sha256", 00:18:47.828 "sha384", 00:18:47.828 "sha512" 00:18:47.828 ], 00:18:47.828 "dhchap_dhgroups": [ 00:18:47.828 "null", 00:18:47.828 "ffdhe2048", 00:18:47.828 "ffdhe3072", 00:18:47.828 "ffdhe4096", 00:18:47.828 "ffdhe6144", 00:18:47.828 "ffdhe8192" 00:18:47.828 ] 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "bdev_nvme_set_hotplug", 00:18:47.828 "params": { 00:18:47.828 "period_us": 100000, 00:18:47.828 "enable": false 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "bdev_malloc_create", 00:18:47.828 "params": { 00:18:47.828 "name": "malloc0", 00:18:47.828 "num_blocks": 8192, 00:18:47.828 "block_size": 4096, 00:18:47.828 "physical_block_size": 4096, 00:18:47.828 "uuid": "cc8cf2c9-cbe9-4e56-8d2a-d77a71123520", 00:18:47.828 "optimal_io_boundary": 0, 00:18:47.828 "md_size": 0, 00:18:47.828 "dif_type": 0, 00:18:47.828 "dif_is_head_of_md": false, 00:18:47.828 "dif_pi_format": 0 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "bdev_wait_for_examine" 00:18:47.828 } 00:18:47.828 ] 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "subsystem": "nbd", 00:18:47.828 "config": [] 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "subsystem": "scheduler", 00:18:47.828 "config": [ 00:18:47.828 { 00:18:47.828 "method": "framework_set_scheduler", 00:18:47.828 "params": { 00:18:47.828 "name": "static" 00:18:47.828 } 00:18:47.828 } 00:18:47.828 ] 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "subsystem": "nvmf", 00:18:47.828 "config": [ 00:18:47.828 { 
00:18:47.828 "method": "nvmf_set_config", 00:18:47.828 "params": { 00:18:47.828 "discovery_filter": "match_any", 00:18:47.828 "admin_cmd_passthru": { 00:18:47.828 "identify_ctrlr": false 00:18:47.828 }, 00:18:47.828 "dhchap_digests": [ 00:18:47.828 "sha256", 00:18:47.828 "sha384", 00:18:47.828 "sha512" 00:18:47.828 ], 00:18:47.828 "dhchap_dhgroups": [ 00:18:47.828 "null", 00:18:47.828 "ffdhe2048", 00:18:47.828 "ffdhe3072", 00:18:47.828 "ffdhe4096", 00:18:47.828 "ffdhe6144", 00:18:47.828 "ffdhe8192" 00:18:47.828 ] 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_set_max_subsystems", 00:18:47.828 "params": { 00:18:47.828 "max_subsystems": 1024 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_set_crdt", 00:18:47.828 "params": { 00:18:47.828 "crdt1": 0, 00:18:47.828 "crdt2": 0, 00:18:47.828 "crdt3": 0 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_create_transport", 00:18:47.828 "params": { 00:18:47.828 "trtype": "TCP", 00:18:47.828 "max_queue_depth": 128, 00:18:47.828 "max_io_qpairs_per_ctrlr": 127, 00:18:47.828 "in_capsule_data_size": 4096, 00:18:47.828 "max_io_size": 131072, 00:18:47.828 "io_unit_size": 131072, 00:18:47.828 "max_aq_depth": 128, 00:18:47.828 "num_shared_buffers": 511, 00:18:47.828 "buf_cache_size": 4294967295, 00:18:47.828 "dif_insert_or_strip": false, 00:18:47.828 "zcopy": false, 00:18:47.828 "c2h_success": false, 00:18:47.828 "sock_priority": 0, 00:18:47.828 "abort_timeout_sec": 1, 00:18:47.828 "ack_timeout": 0, 00:18:47.828 "data_wr_pool_size": 0 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_create_subsystem", 00:18:47.828 "params": { 00:18:47.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.828 "allow_any_host": false, 00:18:47.828 "serial_number": "00000000000000000000", 00:18:47.828 "model_number": "SPDK bdev Controller", 00:18:47.828 "max_namespaces": 32, 00:18:47.828 "min_cntlid": 1, 00:18:47.828 "max_cntlid": 65519, 00:18:47.828 
"ana_reporting": false 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_subsystem_add_host", 00:18:47.828 "params": { 00:18:47.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.828 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.828 "psk": "key0" 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_subsystem_add_ns", 00:18:47.828 "params": { 00:18:47.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.828 "namespace": { 00:18:47.828 "nsid": 1, 00:18:47.828 "bdev_name": "malloc0", 00:18:47.828 "nguid": "CC8CF2C9CBE94E568D2AD77A71123520", 00:18:47.828 "uuid": "cc8cf2c9-cbe9-4e56-8d2a-d77a71123520", 00:18:47.828 "no_auto_visible": false 00:18:47.828 } 00:18:47.828 } 00:18:47.828 }, 00:18:47.828 { 00:18:47.828 "method": "nvmf_subsystem_add_listener", 00:18:47.828 "params": { 00:18:47.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.828 "listen_address": { 00:18:47.828 "trtype": "TCP", 00:18:47.828 "adrfam": "IPv4", 00:18:47.828 "traddr": "10.0.0.2", 00:18:47.828 "trsvcid": "4420" 00:18:47.828 }, 00:18:47.828 "secure_channel": false, 00:18:47.828 "sock_impl": "ssl" 00:18:47.828 } 00:18:47.828 } 00:18:47.828 ] 00:18:47.828 } 00:18:47.828 ] 00:18:47.828 }' 00:18:48.087 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:48.347 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:48.347 "subsystems": [ 00:18:48.347 { 00:18:48.347 "subsystem": "keyring", 00:18:48.347 "config": [ 00:18:48.347 { 00:18:48.347 "method": "keyring_file_add_key", 00:18:48.347 "params": { 00:18:48.347 "name": "key0", 00:18:48.347 "path": "/tmp/tmp.z7LdbJtCei" 00:18:48.347 } 00:18:48.347 } 00:18:48.347 ] 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "subsystem": "iobuf", 00:18:48.347 "config": [ 00:18:48.347 { 00:18:48.347 "method": "iobuf_set_options", 00:18:48.347 "params": { 00:18:48.347 
"small_pool_count": 8192, 00:18:48.347 "large_pool_count": 1024, 00:18:48.347 "small_bufsize": 8192, 00:18:48.347 "large_bufsize": 135168, 00:18:48.347 "enable_numa": false 00:18:48.347 } 00:18:48.347 } 00:18:48.347 ] 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "subsystem": "sock", 00:18:48.347 "config": [ 00:18:48.347 { 00:18:48.347 "method": "sock_set_default_impl", 00:18:48.347 "params": { 00:18:48.347 "impl_name": "posix" 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "sock_impl_set_options", 00:18:48.347 "params": { 00:18:48.347 "impl_name": "ssl", 00:18:48.347 "recv_buf_size": 4096, 00:18:48.347 "send_buf_size": 4096, 00:18:48.347 "enable_recv_pipe": true, 00:18:48.347 "enable_quickack": false, 00:18:48.347 "enable_placement_id": 0, 00:18:48.347 "enable_zerocopy_send_server": true, 00:18:48.347 "enable_zerocopy_send_client": false, 00:18:48.347 "zerocopy_threshold": 0, 00:18:48.347 "tls_version": 0, 00:18:48.347 "enable_ktls": false 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "sock_impl_set_options", 00:18:48.347 "params": { 00:18:48.347 "impl_name": "posix", 00:18:48.347 "recv_buf_size": 2097152, 00:18:48.347 "send_buf_size": 2097152, 00:18:48.347 "enable_recv_pipe": true, 00:18:48.347 "enable_quickack": false, 00:18:48.347 "enable_placement_id": 0, 00:18:48.347 "enable_zerocopy_send_server": true, 00:18:48.347 "enable_zerocopy_send_client": false, 00:18:48.347 "zerocopy_threshold": 0, 00:18:48.347 "tls_version": 0, 00:18:48.347 "enable_ktls": false 00:18:48.347 } 00:18:48.347 } 00:18:48.347 ] 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "subsystem": "vmd", 00:18:48.347 "config": [] 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "subsystem": "accel", 00:18:48.347 "config": [ 00:18:48.347 { 00:18:48.347 "method": "accel_set_options", 00:18:48.347 "params": { 00:18:48.347 "small_cache_size": 128, 00:18:48.347 "large_cache_size": 16, 00:18:48.347 "task_count": 2048, 00:18:48.347 "sequence_count": 2048, 00:18:48.347 
"buf_count": 2048 00:18:48.347 } 00:18:48.347 } 00:18:48.347 ] 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "subsystem": "bdev", 00:18:48.347 "config": [ 00:18:48.347 { 00:18:48.347 "method": "bdev_set_options", 00:18:48.347 "params": { 00:18:48.347 "bdev_io_pool_size": 65535, 00:18:48.347 "bdev_io_cache_size": 256, 00:18:48.347 "bdev_auto_examine": true, 00:18:48.347 "iobuf_small_cache_size": 128, 00:18:48.347 "iobuf_large_cache_size": 16 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_raid_set_options", 00:18:48.347 "params": { 00:18:48.347 "process_window_size_kb": 1024, 00:18:48.347 "process_max_bandwidth_mb_sec": 0 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_iscsi_set_options", 00:18:48.347 "params": { 00:18:48.347 "timeout_sec": 30 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_nvme_set_options", 00:18:48.347 "params": { 00:18:48.347 "action_on_timeout": "none", 00:18:48.347 "timeout_us": 0, 00:18:48.347 "timeout_admin_us": 0, 00:18:48.347 "keep_alive_timeout_ms": 10000, 00:18:48.347 "arbitration_burst": 0, 00:18:48.347 "low_priority_weight": 0, 00:18:48.347 "medium_priority_weight": 0, 00:18:48.347 "high_priority_weight": 0, 00:18:48.347 "nvme_adminq_poll_period_us": 10000, 00:18:48.347 "nvme_ioq_poll_period_us": 0, 00:18:48.347 "io_queue_requests": 512, 00:18:48.347 "delay_cmd_submit": true, 00:18:48.347 "transport_retry_count": 4, 00:18:48.347 "bdev_retry_count": 3, 00:18:48.347 "transport_ack_timeout": 0, 00:18:48.347 "ctrlr_loss_timeout_sec": 0, 00:18:48.347 "reconnect_delay_sec": 0, 00:18:48.347 "fast_io_fail_timeout_sec": 0, 00:18:48.347 "disable_auto_failback": false, 00:18:48.347 "generate_uuids": false, 00:18:48.347 "transport_tos": 0, 00:18:48.347 "nvme_error_stat": false, 00:18:48.347 "rdma_srq_size": 0, 00:18:48.347 "io_path_stat": false, 00:18:48.347 "allow_accel_sequence": false, 00:18:48.347 "rdma_max_cq_size": 0, 00:18:48.347 "rdma_cm_event_timeout_ms": 0, 
00:18:48.347 "dhchap_digests": [ 00:18:48.347 "sha256", 00:18:48.347 "sha384", 00:18:48.347 "sha512" 00:18:48.347 ], 00:18:48.347 "dhchap_dhgroups": [ 00:18:48.347 "null", 00:18:48.347 "ffdhe2048", 00:18:48.347 "ffdhe3072", 00:18:48.347 "ffdhe4096", 00:18:48.347 "ffdhe6144", 00:18:48.347 "ffdhe8192" 00:18:48.347 ] 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_nvme_attach_controller", 00:18:48.347 "params": { 00:18:48.347 "name": "nvme0", 00:18:48.347 "trtype": "TCP", 00:18:48.347 "adrfam": "IPv4", 00:18:48.347 "traddr": "10.0.0.2", 00:18:48.347 "trsvcid": "4420", 00:18:48.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.347 "prchk_reftag": false, 00:18:48.347 "prchk_guard": false, 00:18:48.347 "ctrlr_loss_timeout_sec": 0, 00:18:48.347 "reconnect_delay_sec": 0, 00:18:48.347 "fast_io_fail_timeout_sec": 0, 00:18:48.347 "psk": "key0", 00:18:48.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.347 "hdgst": false, 00:18:48.347 "ddgst": false, 00:18:48.347 "multipath": "multipath" 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_nvme_set_hotplug", 00:18:48.347 "params": { 00:18:48.347 "period_us": 100000, 00:18:48.347 "enable": false 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_enable_histogram", 00:18:48.347 "params": { 00:18:48.347 "name": "nvme0n1", 00:18:48.347 "enable": true 00:18:48.347 } 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "method": "bdev_wait_for_examine" 00:18:48.347 } 00:18:48.347 ] 00:18:48.347 }, 00:18:48.347 { 00:18:48.347 "subsystem": "nbd", 00:18:48.347 "config": [] 00:18:48.347 } 00:18:48.347 ] 00:18:48.347 }' 00:18:48.347 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1494633 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1494633 ']' 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1494633 00:18:48.348 18:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494633 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494633' 00:18:48.348 killing process with pid 1494633 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1494633 00:18:48.348 Received shutdown signal, test time was about 1.000000 seconds 00:18:48.348 00:18:48.348 Latency(us) 00:18:48.348 [2024-12-09T17:08:11.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.348 [2024-12-09T17:08:11.389Z] =================================================================================================================== 00:18:48.348 [2024-12-09T17:08:11.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.348 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1494633 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1494608 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1494608 ']' 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1494608 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.608 
18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494608 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494608' 00:18:48.608 killing process with pid 1494608 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1494608 00:18:48.608 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1494608 00:18:48.867 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:48.867 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.867 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:48.867 "subsystems": [ 00:18:48.867 { 00:18:48.867 "subsystem": "keyring", 00:18:48.867 "config": [ 00:18:48.867 { 00:18:48.867 "method": "keyring_file_add_key", 00:18:48.867 "params": { 00:18:48.867 "name": "key0", 00:18:48.867 "path": "/tmp/tmp.z7LdbJtCei" 00:18:48.867 } 00:18:48.867 } 00:18:48.867 ] 00:18:48.867 }, 00:18:48.867 { 00:18:48.867 "subsystem": "iobuf", 00:18:48.867 "config": [ 00:18:48.867 { 00:18:48.867 "method": "iobuf_set_options", 00:18:48.867 "params": { 00:18:48.867 "small_pool_count": 8192, 00:18:48.867 "large_pool_count": 1024, 00:18:48.867 "small_bufsize": 8192, 00:18:48.867 "large_bufsize": 135168, 00:18:48.867 "enable_numa": false 00:18:48.867 } 00:18:48.867 } 00:18:48.867 ] 00:18:48.867 }, 00:18:48.867 { 00:18:48.867 "subsystem": "sock", 00:18:48.867 "config": [ 00:18:48.867 { 00:18:48.867 "method": "sock_set_default_impl", 00:18:48.867 "params": { 00:18:48.867 "impl_name": "posix" 
00:18:48.867 } 00:18:48.867 }, 00:18:48.867 { 00:18:48.867 "method": "sock_impl_set_options", 00:18:48.867 "params": { 00:18:48.867 "impl_name": "ssl", 00:18:48.867 "recv_buf_size": 4096, 00:18:48.867 "send_buf_size": 4096, 00:18:48.867 "enable_recv_pipe": true, 00:18:48.867 "enable_quickack": false, 00:18:48.867 "enable_placement_id": 0, 00:18:48.867 "enable_zerocopy_send_server": true, 00:18:48.867 "enable_zerocopy_send_client": false, 00:18:48.867 "zerocopy_threshold": 0, 00:18:48.867 "tls_version": 0, 00:18:48.867 "enable_ktls": false 00:18:48.867 } 00:18:48.867 }, 00:18:48.867 { 00:18:48.867 "method": "sock_impl_set_options", 00:18:48.867 "params": { 00:18:48.867 "impl_name": "posix", 00:18:48.867 "recv_buf_size": 2097152, 00:18:48.867 "send_buf_size": 2097152, 00:18:48.867 "enable_recv_pipe": true, 00:18:48.867 "enable_quickack": false, 00:18:48.867 "enable_placement_id": 0, 00:18:48.867 "enable_zerocopy_send_server": true, 00:18:48.867 "enable_zerocopy_send_client": false, 00:18:48.867 "zerocopy_threshold": 0, 00:18:48.867 "tls_version": 0, 00:18:48.867 "enable_ktls": false 00:18:48.867 } 00:18:48.867 } 00:18:48.867 ] 00:18:48.867 }, 00:18:48.867 { 00:18:48.867 "subsystem": "vmd", 00:18:48.867 "config": [] 00:18:48.867 }, 00:18:48.867 { 00:18:48.867 "subsystem": "accel", 00:18:48.867 "config": [ 00:18:48.867 { 00:18:48.867 "method": "accel_set_options", 00:18:48.867 "params": { 00:18:48.867 "small_cache_size": 128, 00:18:48.867 "large_cache_size": 16, 00:18:48.867 "task_count": 2048, 00:18:48.867 "sequence_count": 2048, 00:18:48.867 "buf_count": 2048 00:18:48.867 } 00:18:48.867 } 00:18:48.867 ] 00:18:48.867 }, 00:18:48.868 { 00:18:48.868 "subsystem": "bdev", 00:18:48.868 "config": [ 00:18:48.868 { 00:18:48.868 "method": "bdev_set_options", 00:18:48.868 "params": { 00:18:48.868 "bdev_io_pool_size": 65535, 00:18:48.868 "bdev_io_cache_size": 256, 00:18:48.868 "bdev_auto_examine": true, 00:18:48.868 "iobuf_small_cache_size": 128, 00:18:48.868 
"iobuf_large_cache_size": 16 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "bdev_raid_set_options", 00:18:48.868 "params": { 00:18:48.868 "process_window_size_kb": 1024, 00:18:48.868 "process_max_bandwidth_mb_sec": 0 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "bdev_iscsi_set_options", 00:18:48.868 "params": { 00:18:48.868 "timeout_sec": 30 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "bdev_nvme_set_options", 00:18:48.868 "params": { 00:18:48.868 "action_on_timeout": "none", 00:18:48.868 "timeout_us": 0, 00:18:48.868 "timeout_admin_us": 0, 00:18:48.868 "keep_alive_timeout_ms": 10000, 00:18:48.868 "arbitration_burst": 0, 00:18:48.868 "low_priority_weight": 0, 00:18:48.868 "medium_priority_weight": 0, 00:18:48.868 "high_priority_weight": 0, 00:18:48.868 "nvme_adminq_poll_period_us": 10000, 00:18:48.868 "nvme_ioq_poll_period_us": 0, 00:18:48.868 "io_queue_requests": 0, 00:18:48.868 "delay_cmd_submit": true, 00:18:48.868 "transport_retry_count": 4, 00:18:48.868 "bdev_retry_count": 3, 00:18:48.868 "transport_ack_timeout": 0, 00:18:48.868 "ctrlr_loss_timeout_sec": 0, 00:18:48.868 "reconnect_delay_sec": 0, 00:18:48.868 "fast_io_fail_timeout_sec": 0, 00:18:48.868 "disable_auto_failback": false, 00:18:48.868 "generate_uuids": false, 00:18:48.868 "transport_tos": 0, 00:18:48.868 "nvme_error_stat": false, 00:18:48.868 "rdma_srq_size": 0, 00:18:48.868 "io_path_stat": false, 00:18:48.868 "allow_accel_sequence": false, 00:18:48.868 "rdma_max_cq_size": 0, 00:18:48.868 "rdma_cm_event_timeout_ms": 0, 00:18:48.868 "dhchap_digests": [ 00:18:48.868 "sha256", 00:18:48.868 "sha384", 00:18:48.868 "sha512" 00:18:48.868 ], 00:18:48.868 "dhchap_dhgroups": [ 00:18:48.868 "null", 00:18:48.868 "ffdhe2048", 00:18:48.868 "ffdhe3072", 00:18:48.868 "ffdhe4096", 00:18:48.868 "ffdhe6144", 00:18:48.868 "ffdhe8192" 00:18:48.868 ] 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "bdev_nvme_set_hotplug", 
00:18:48.868 "params": { 00:18:48.868 "period_us": 100000, 00:18:48.868 "enable": false 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "bdev_malloc_create", 00:18:48.868 "params": { 00:18:48.868 "name": "malloc0", 00:18:48.868 "num_blocks": 8192, 00:18:48.868 "block_size": 4096, 00:18:48.868 "physical_block_size": 4096, 00:18:48.868 "uuid": "cc8cf2c9-cbe9-4e56-8d2a-d77a71123520", 00:18:48.868 "optimal_io_boundary": 0, 00:18:48.868 "md_size": 0, 00:18:48.868 "dif_type": 0, 00:18:48.868 "dif_is_head_of_md": false, 00:18:48.868 "dif_pi_format": 0 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "bdev_wait_for_examine" 00:18:48.868 } 00:18:48.868 ] 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "subsystem": "nbd", 00:18:48.868 "config": [] 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "subsystem": "scheduler", 00:18:48.868 "config": [ 00:18:48.868 { 00:18:48.868 "method": "framework_set_scheduler", 00:18:48.868 "params": { 00:18:48.868 "name": "static" 00:18:48.868 } 00:18:48.868 } 00:18:48.868 ] 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "subsystem": "nvmf", 00:18:48.868 "config": [ 00:18:48.868 { 00:18:48.868 "method": "nvmf_set_config", 00:18:48.868 "params": { 00:18:48.868 "discovery_filter": "match_any", 00:18:48.868 "admin_cmd_passthru": { 00:18:48.868 "identify_ctrlr": false 00:18:48.868 }, 00:18:48.868 "dhchap_digests": [ 00:18:48.868 "sha256", 00:18:48.868 "sha384", 00:18:48.868 "sha512" 00:18:48.868 ], 00:18:48.868 "dhchap_dhgroups": [ 00:18:48.868 "null", 00:18:48.868 "ffdhe2048", 00:18:48.868 "ffdhe3072", 00:18:48.868 "ffdhe4096", 00:18:48.868 "ffdhe6144", 00:18:48.868 "ffdhe8192" 00:18:48.868 ] 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_set_max_subsystems", 00:18:48.868 "params": { 00:18:48.868 "max_subsystems": 1024 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_set_crdt", 00:18:48.868 "params": { 00:18:48.868 "crdt1": 0, 00:18:48.868 "crdt2": 0, 00:18:48.868 
"crdt3": 0 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_create_transport", 00:18:48.868 "params": { 00:18:48.868 "trtype": "TCP", 00:18:48.868 "max_queue_depth": 128, 00:18:48.868 "max_io_qpairs_per_ctrlr": 127, 00:18:48.868 "in_capsule_data_size": 4096, 00:18:48.868 "max_io_size": 131072, 00:18:48.868 "io_unit_size": 131072, 00:18:48.868 "max_aq_depth": 128, 00:18:48.868 "num_shared_buffers": 511, 00:18:48.868 "buf_cache_size": 4294967295, 00:18:48.868 "dif_insert_or_strip": false, 00:18:48.868 "zcopy": false, 00:18:48.868 "c2h_success": false, 00:18:48.868 "sock_priority": 0, 00:18:48.868 "abort_timeout_sec": 1, 00:18:48.868 "ack_timeout": 0, 00:18:48.868 "data_wr_pool_size": 0 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_create_subsystem", 00:18:48.868 "params": { 00:18:48.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.868 "allow_any_host": false, 00:18:48.868 "serial_number": "00000000000000000000", 00:18:48.868 "model_number": "SPDK bdev Controller", 00:18:48.868 "max_namespaces": 32, 00:18:48.868 "min_cntlid": 1, 00:18:48.868 "max_cntlid": 65519, 00:18:48.868 "ana_reporting": false 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_subsystem_add_host", 00:18:48.868 "params": { 00:18:48.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.868 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.868 "psk": "key0" 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_subsystem_add_ns", 00:18:48.868 "params": { 00:18:48.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.868 "namespace": { 00:18:48.868 "nsid": 1, 00:18:48.868 "bdev_name": "malloc0", 00:18:48.868 "nguid": "CC8CF2C9CBE94E568D2AD77A71123520", 00:18:48.868 "uuid": "cc8cf2c9-cbe9-4e56-8d2a-d77a71123520", 00:18:48.868 "no_auto_visible": false 00:18:48.868 } 00:18:48.868 } 00:18:48.868 }, 00:18:48.868 { 00:18:48.868 "method": "nvmf_subsystem_add_listener", 00:18:48.868 "params": { 00:18:48.868 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:48.868 "listen_address": { 00:18:48.868 "trtype": "TCP", 00:18:48.868 "adrfam": "IPv4", 00:18:48.868 "traddr": "10.0.0.2", 00:18:48.868 "trsvcid": "4420" 00:18:48.868 }, 00:18:48.868 "secure_channel": false, 00:18:48.868 "sock_impl": "ssl" 00:18:48.868 } 00:18:48.868 } 00:18:48.868 ] 00:18:48.868 } 00:18:48.868 ] 00:18:48.868 }' 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1495038 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1495038 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1495038 ']' 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.868 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.868 [2024-12-09 18:08:11.843391] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:48.868 [2024-12-09 18:08:11.843476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.127 [2024-12-09 18:08:11.913671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.127 [2024-12-09 18:08:11.969540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.127 [2024-12-09 18:08:11.969616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.127 [2024-12-09 18:08:11.969630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.127 [2024-12-09 18:08:11.969642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.127 [2024-12-09 18:08:11.969666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.127 [2024-12-09 18:08:11.970276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.387 [2024-12-09 18:08:12.213323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.387 [2024-12-09 18:08:12.245363] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.387 [2024-12-09 18:08:12.245683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1495186 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1495186 /var/tmp/bdevperf.sock 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1495186 ']' 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.954 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:49.954 "subsystems": [ 00:18:49.954 { 00:18:49.954 "subsystem": "keyring", 00:18:49.954 "config": [ 00:18:49.954 { 00:18:49.954 "method": "keyring_file_add_key", 00:18:49.954 "params": { 00:18:49.954 "name": "key0", 00:18:49.954 "path": "/tmp/tmp.z7LdbJtCei" 00:18:49.954 } 00:18:49.954 } 00:18:49.954 ] 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "subsystem": "iobuf", 00:18:49.954 "config": [ 00:18:49.954 { 00:18:49.954 "method": "iobuf_set_options", 00:18:49.954 "params": { 00:18:49.954 "small_pool_count": 8192, 00:18:49.954 "large_pool_count": 1024, 00:18:49.954 "small_bufsize": 8192, 00:18:49.954 "large_bufsize": 135168, 00:18:49.954 "enable_numa": false 00:18:49.954 } 00:18:49.954 } 00:18:49.954 ] 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "subsystem": "sock", 00:18:49.954 "config": [ 00:18:49.954 { 00:18:49.954 "method": "sock_set_default_impl", 00:18:49.954 "params": { 00:18:49.954 "impl_name": "posix" 00:18:49.954 } 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "method": "sock_impl_set_options", 00:18:49.954 "params": { 00:18:49.954 "impl_name": "ssl", 00:18:49.954 "recv_buf_size": 4096, 00:18:49.954 "send_buf_size": 4096, 00:18:49.954 "enable_recv_pipe": true, 00:18:49.954 "enable_quickack": false, 00:18:49.954 "enable_placement_id": 0, 00:18:49.954 "enable_zerocopy_send_server": true, 00:18:49.954 "enable_zerocopy_send_client": false, 00:18:49.954 "zerocopy_threshold": 0, 00:18:49.954 "tls_version": 0, 00:18:49.954 "enable_ktls": false 00:18:49.954 } 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "method": "sock_impl_set_options", 00:18:49.954 "params": { 
00:18:49.954 "impl_name": "posix", 00:18:49.954 "recv_buf_size": 2097152, 00:18:49.954 "send_buf_size": 2097152, 00:18:49.954 "enable_recv_pipe": true, 00:18:49.954 "enable_quickack": false, 00:18:49.954 "enable_placement_id": 0, 00:18:49.954 "enable_zerocopy_send_server": true, 00:18:49.954 "enable_zerocopy_send_client": false, 00:18:49.954 "zerocopy_threshold": 0, 00:18:49.954 "tls_version": 0, 00:18:49.954 "enable_ktls": false 00:18:49.954 } 00:18:49.954 } 00:18:49.954 ] 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "subsystem": "vmd", 00:18:49.954 "config": [] 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "subsystem": "accel", 00:18:49.954 "config": [ 00:18:49.954 { 00:18:49.954 "method": "accel_set_options", 00:18:49.954 "params": { 00:18:49.954 "small_cache_size": 128, 00:18:49.954 "large_cache_size": 16, 00:18:49.954 "task_count": 2048, 00:18:49.954 "sequence_count": 2048, 00:18:49.954 "buf_count": 2048 00:18:49.954 } 00:18:49.954 } 00:18:49.954 ] 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "subsystem": "bdev", 00:18:49.954 "config": [ 00:18:49.954 { 00:18:49.954 "method": "bdev_set_options", 00:18:49.954 "params": { 00:18:49.954 "bdev_io_pool_size": 65535, 00:18:49.954 "bdev_io_cache_size": 256, 00:18:49.954 "bdev_auto_examine": true, 00:18:49.954 "iobuf_small_cache_size": 128, 00:18:49.954 "iobuf_large_cache_size": 16 00:18:49.954 } 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "method": "bdev_raid_set_options", 00:18:49.954 "params": { 00:18:49.954 "process_window_size_kb": 1024, 00:18:49.954 "process_max_bandwidth_mb_sec": 0 00:18:49.954 } 00:18:49.954 }, 00:18:49.954 { 00:18:49.954 "method": "bdev_iscsi_set_options", 00:18:49.954 "params": { 00:18:49.954 "timeout_sec": 30 00:18:49.954 } 00:18:49.954 }, 00:18:49.954 { 00:18:49.955 "method": "bdev_nvme_set_options", 00:18:49.955 "params": { 00:18:49.955 "action_on_timeout": "none", 00:18:49.955 "timeout_us": 0, 00:18:49.955 "timeout_admin_us": 0, 00:18:49.955 "keep_alive_timeout_ms": 10000, 00:18:49.955 
"arbitration_burst": 0, 00:18:49.955 "low_priority_weight": 0, 00:18:49.955 "medium_priority_weight": 0, 00:18:49.955 "high_priority_weight": 0, 00:18:49.955 "nvme_adminq_poll_period_us": 10000, 00:18:49.955 "nvme_ioq_poll_period_us": 0, 00:18:49.955 "io_queue_requests": 512, 00:18:49.955 "delay_cmd_submit": true, 00:18:49.955 "transport_retry_count": 4, 00:18:49.955 "bdev_retry_count": 3, 00:18:49.955 "transport_ack_timeout": 0, 00:18:49.955 "ctrlr_loss_timeout_sec": 0, 00:18:49.955 "reconnect_delay_sec": 0, 00:18:49.955 "fast_io_fail_timeout_sec": 0, 00:18:49.955 "disable_auto_failback": false, 00:18:49.955 "generate_uuids": false, 00:18:49.955 "transport_tos": 0, 00:18:49.955 "nvme_error_stat": false, 00:18:49.955 "rdma_srq_size": 0, 00:18:49.955 "io_path_stat": false, 00:18:49.955 "allow_accel_sequence": false, 00:18:49.955 "rdma_max_cq_size": 0, 00:18:49.955 "rdma_cm_event_timeout_ms": 0, 00:18:49.955 "dhchap_digests": [ 00:18:49.955 "sha256", 00:18:49.955 "sha384", 00:18:49.955 "sha512" 00:18:49.955 ], 00:18:49.955 "dhchap_dhgroups": [ 00:18:49.955 "null", 00:18:49.955 "ffdhe2048", 00:18:49.955 "ffdhe3072", 00:18:49.955 "ffdhe4096", 00:18:49.955 "ffdhe6144", 00:18:49.955 "ffdhe8192" 00:18:49.955 ] 00:18:49.955 } 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "method": "bdev_nvme_attach_controller", 00:18:49.955 "params": { 00:18:49.955 "name": "nvme0", 00:18:49.955 "trtype": "TCP", 00:18:49.955 "adrfam": "IPv4", 00:18:49.955 "traddr": "10.0.0.2", 00:18:49.955 "trsvcid": "4420", 00:18:49.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.955 "prchk_reftag": false, 00:18:49.955 "prchk_guard": false, 00:18:49.955 "ctrlr_loss_timeout_sec": 0, 00:18:49.955 "reconnect_delay_sec": 0, 00:18:49.955 "fast_io_fail_timeout_sec": 0, 00:18:49.955 "psk": "key0", 00:18:49.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.955 "hdgst": false, 00:18:49.955 "ddgst": false, 00:18:49.955 "multipath": "multipath" 00:18:49.955 } 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 
"method": "bdev_nvme_set_hotplug", 00:18:49.955 "params": { 00:18:49.955 "period_us": 100000, 00:18:49.955 "enable": false 00:18:49.955 } 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "method": "bdev_enable_histogram", 00:18:49.955 "params": { 00:18:49.955 "name": "nvme0n1", 00:18:49.955 "enable": true 00:18:49.955 } 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "method": "bdev_wait_for_examine" 00:18:49.955 } 00:18:49.955 ] 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "subsystem": "nbd", 00:18:49.955 "config": [] 00:18:49.955 } 00:18:49.955 ] 00:18:49.955 }' 00:18:49.955 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.955 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.955 [2024-12-09 18:08:12.948702] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:18:49.955 [2024-12-09 18:08:12.948778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495186 ] 00:18:50.213 [2024-12-09 18:08:13.014346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.213 [2024-12-09 18:08:13.070348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.472 [2024-12-09 18:08:13.253083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.039 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.039 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.039 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:51.039 18:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:51.297 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.297 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.556 Running I/O for 1 seconds... 00:18:52.492 3218.00 IOPS, 12.57 MiB/s 00:18:52.492 Latency(us) 00:18:52.492 [2024-12-09T17:08:15.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.492 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:52.492 Verification LBA range: start 0x0 length 0x2000 00:18:52.492 nvme0n1 : 1.02 3286.01 12.84 0.00 0.00 38630.34 6650.69 52040.44 00:18:52.492 [2024-12-09T17:08:15.533Z] =================================================================================================================== 00:18:52.492 [2024-12-09T17:08:15.533Z] Total : 3286.01 12.84 0.00 0.00 38630.34 6650.69 52040.44 00:18:52.492 { 00:18:52.492 "results": [ 00:18:52.492 { 00:18:52.492 "job": "nvme0n1", 00:18:52.492 "core_mask": "0x2", 00:18:52.492 "workload": "verify", 00:18:52.492 "status": "finished", 00:18:52.492 "verify_range": { 00:18:52.492 "start": 0, 00:18:52.492 "length": 8192 00:18:52.492 }, 00:18:52.492 "queue_depth": 128, 00:18:52.492 "io_size": 4096, 00:18:52.492 "runtime": 1.018559, 00:18:52.492 "iops": 3286.0148503915825, 00:18:52.492 "mibps": 12.83599550934212, 00:18:52.492 "io_failed": 0, 00:18:52.492 "io_timeout": 0, 00:18:52.492 "avg_latency_us": 38630.34238024101, 00:18:52.492 "min_latency_us": 6650.69037037037, 00:18:52.492 "max_latency_us": 52040.43851851852 00:18:52.492 } 00:18:52.492 ], 00:18:52.492 "core_count": 1 00:18:52.492 } 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:52.492 18:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:52.492 nvmf_trace.0 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1495186 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1495186 ']' 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1495186 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1495186 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495186' 00:18:52.492 killing process with pid 1495186 00:18:52.492 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1495186 00:18:52.493 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.493 00:18:52.493 Latency(us) 00:18:52.493 [2024-12-09T17:08:15.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.493 [2024-12-09T17:08:15.534Z] =================================================================================================================== 00:18:52.493 [2024-12-09T17:08:15.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.493 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1495186 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.751 rmmod nvme_tcp 00:18:52.751 rmmod nvme_fabrics 00:18:52.751 rmmod nvme_keyring 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1495038 ']' 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1495038 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1495038 ']' 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1495038 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.751 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495038 00:18:53.010 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.010 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.010 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495038' 00:18:53.010 killing process with pid 1495038 00:18:53.010 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1495038 00:18:53.010 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1495038 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.270 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TIlfI4CNX4 /tmp/tmp.RZ9wBSz2nP /tmp/tmp.z7LdbJtCei 00:18:55.177 00:18:55.177 real 1m23.606s 00:18:55.177 user 2m17.892s 00:18:55.177 sys 0m25.914s 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.177 ************************************ 00:18:55.177 END TEST nvmf_tls 00:18:55.177 ************************************ 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.177 ************************************ 00:18:55.177 START TEST nvmf_fips 00:18:55.177 ************************************ 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:55.177 * Looking for test storage... 00:18:55.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.177 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.436 
18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:55.436 18:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.436 --rc genhtml_branch_coverage=1 00:18:55.436 --rc genhtml_function_coverage=1 00:18:55.436 --rc genhtml_legend=1 00:18:55.436 --rc geninfo_all_blocks=1 00:18:55.436 --rc geninfo_unexecuted_blocks=1 00:18:55.436 00:18:55.436 ' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.436 --rc genhtml_branch_coverage=1 00:18:55.436 --rc genhtml_function_coverage=1 00:18:55.436 --rc genhtml_legend=1 00:18:55.436 --rc geninfo_all_blocks=1 00:18:55.436 --rc geninfo_unexecuted_blocks=1 00:18:55.436 00:18:55.436 ' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.436 --rc genhtml_branch_coverage=1 00:18:55.436 --rc genhtml_function_coverage=1 00:18:55.436 --rc genhtml_legend=1 00:18:55.436 --rc geninfo_all_blocks=1 00:18:55.436 --rc geninfo_unexecuted_blocks=1 00:18:55.436 00:18:55.436 ' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.436 --rc genhtml_branch_coverage=1 00:18:55.436 --rc genhtml_function_coverage=1 00:18:55.436 --rc genhtml_legend=1 00:18:55.436 --rc geninfo_all_blocks=1 00:18:55.436 --rc geninfo_unexecuted_blocks=1 00:18:55.436 00:18:55.436 ' 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.436 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.437 18:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.437 18:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:55.437 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:55.438 Error setting digest 00:18:55.438 40C24CA10E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:55.438 40C24CA10E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.438 18:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.438 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.970 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.970 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.971 18:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:18:57.971 00:18:57.971 --- 10.0.0.2 ping statistics --- 00:18:57.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.971 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:18:57.971 00:18:57.971 --- 10.0.0.1 ping statistics --- 00:18:57.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.971 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.971 18:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1497555 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1497555 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1497555 ']' 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.971 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.971 [2024-12-09 18:08:20.838695] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:57.971 [2024-12-09 18:08:20.838789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.971 [2024-12-09 18:08:20.910595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.971 [2024-12-09 18:08:20.963600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.971 [2024-12-09 18:08:20.963657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.971 [2024-12-09 18:08:20.963685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.971 [2024-12-09 18:08:20.963696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.971 [2024-12-09 18:08:20.963714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.971 [2024-12-09 18:08:20.964269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.wBi 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.wBi 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.wBi 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.wBi 00:18:58.230 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.488 [2024-12-09 18:08:21.399879] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.488 [2024-12-09 18:08:21.415872] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.488 [2024-12-09 18:08:21.416131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.488 malloc0 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1497701 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1497701 /var/tmp/bdevperf.sock 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1497701 ']' 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.488 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.746 [2024-12-09 18:08:21.549885] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:18:58.746 [2024-12-09 18:08:21.549968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497701 ] 00:18:58.746 [2024-12-09 18:08:21.617488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.746 [2024-12-09 18:08:21.675521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.746 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.746 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:58.746 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.wBi 00:18:59.311 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.312 [2024-12-09 18:08:22.292802] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.571 TLSTESTn1 00:18:59.571 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.571 Running I/O for 10 seconds... 
00:19:01.890 3469.00 IOPS, 13.55 MiB/s [2024-12-09T17:08:25.864Z] 3558.00 IOPS, 13.90 MiB/s [2024-12-09T17:08:26.801Z] 3582.33 IOPS, 13.99 MiB/s [2024-12-09T17:08:27.739Z] 3577.75 IOPS, 13.98 MiB/s [2024-12-09T17:08:28.679Z] 3593.20 IOPS, 14.04 MiB/s [2024-12-09T17:08:29.659Z] 3601.67 IOPS, 14.07 MiB/s [2024-12-09T17:08:30.598Z] 3584.57 IOPS, 14.00 MiB/s [2024-12-09T17:08:31.539Z] 3586.75 IOPS, 14.01 MiB/s [2024-12-09T17:08:32.921Z] 3582.33 IOPS, 13.99 MiB/s [2024-12-09T17:08:32.921Z] 3589.40 IOPS, 14.02 MiB/s 00:19:09.880 Latency(us) 00:19:09.880 [2024-12-09T17:08:32.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.880 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:09.880 Verification LBA range: start 0x0 length 0x2000 00:19:09.880 TLSTESTn1 : 10.02 3595.01 14.04 0.00 0.00 35546.97 6893.42 52040.44 00:19:09.880 [2024-12-09T17:08:32.921Z] =================================================================================================================== 00:19:09.880 [2024-12-09T17:08:32.921Z] Total : 3595.01 14.04 0.00 0.00 35546.97 6893.42 52040.44 00:19:09.880 { 00:19:09.880 "results": [ 00:19:09.880 { 00:19:09.880 "job": "TLSTESTn1", 00:19:09.880 "core_mask": "0x4", 00:19:09.880 "workload": "verify", 00:19:09.880 "status": "finished", 00:19:09.880 "verify_range": { 00:19:09.880 "start": 0, 00:19:09.880 "length": 8192 00:19:09.880 }, 00:19:09.880 "queue_depth": 128, 00:19:09.880 "io_size": 4096, 00:19:09.880 "runtime": 10.019177, 00:19:09.880 "iops": 3595.005857267518, 00:19:09.880 "mibps": 14.042991629951242, 00:19:09.880 "io_failed": 0, 00:19:09.880 "io_timeout": 0, 00:19:09.880 "avg_latency_us": 35546.966221490096, 00:19:09.880 "min_latency_us": 6893.416296296296, 00:19:09.880 "max_latency_us": 52040.43851851852 00:19:09.880 } 00:19:09.880 ], 00:19:09.880 "core_count": 1 00:19:09.880 } 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:09.880 
18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:09.880 nvmf_trace.0 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1497701 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1497701 ']' 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1497701 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497701 00:19:09.880 18:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497701' 00:19:09.880 killing process with pid 1497701 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1497701 00:19:09.880 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.880 00:19:09.880 Latency(us) 00:19:09.880 [2024-12-09T17:08:32.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.880 [2024-12-09T17:08:32.921Z] =================================================================================================================== 00:19:09.880 [2024-12-09T17:08:32.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1497701 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.880 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.880 rmmod nvme_tcp 00:19:09.880 rmmod nvme_fabrics 00:19:10.140 rmmod nvme_keyring 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1497555 ']' 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1497555 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1497555 ']' 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1497555 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497555 00:19:10.140 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.140 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.140 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497555' 00:19:10.140 killing process with pid 1497555 00:19:10.140 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1497555 00:19:10.140 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1497555 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.401 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.wBi 00:19:12.308 00:19:12.308 real 0m17.139s 00:19:12.308 user 0m22.726s 00:19:12.308 sys 0m5.406s 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.308 ************************************ 00:19:12.308 END TEST nvmf_fips 00:19:12.308 ************************************ 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.308 ************************************ 00:19:12.308 START TEST nvmf_control_msg_list 00:19:12.308 ************************************ 00:19:12.308 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:12.566 * Looking for test storage... 00:19:12.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.566 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:12.566 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:12.566 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:12.566 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:12.566 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.567 18:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.567 --rc genhtml_branch_coverage=1 00:19:12.567 --rc genhtml_function_coverage=1 00:19:12.567 --rc genhtml_legend=1 00:19:12.567 --rc geninfo_all_blocks=1 00:19:12.567 --rc geninfo_unexecuted_blocks=1 00:19:12.567 00:19:12.567 ' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.567 --rc genhtml_branch_coverage=1 00:19:12.567 --rc genhtml_function_coverage=1 00:19:12.567 --rc genhtml_legend=1 00:19:12.567 --rc geninfo_all_blocks=1 00:19:12.567 --rc geninfo_unexecuted_blocks=1 00:19:12.567 00:19:12.567 ' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.567 --rc genhtml_branch_coverage=1 00:19:12.567 --rc genhtml_function_coverage=1 00:19:12.567 --rc genhtml_legend=1 00:19:12.567 --rc geninfo_all_blocks=1 00:19:12.567 --rc geninfo_unexecuted_blocks=1 00:19:12.567 00:19:12.567 ' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:19:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.567 --rc genhtml_branch_coverage=1 00:19:12.567 --rc genhtml_function_coverage=1 00:19:12.567 --rc genhtml_legend=1 00:19:12.567 --rc geninfo_all_blocks=1 00:19:12.567 --rc geninfo_unexecuted_blocks=1 00:19:12.567 00:19:12.567 ' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.567 18:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.567 18:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.567 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.568 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.568 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.568 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.568 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.568 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:12.568 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:15.100 18:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.100 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:15.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:15.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:15.101 18:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:15.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.101 18:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:15.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.101 18:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:15.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:19:15.101 00:19:15.101 --- 10.0.0.2 ping statistics --- 00:19:15.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.101 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:19:15.101 00:19:15.101 --- 10.0.0.1 ping statistics --- 00:19:15.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.101 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1500995 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1500995 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1500995 ']' 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.101 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.101 [2024-12-09 18:08:37.803726] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:19:15.102 [2024-12-09 18:08:37.803797] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.102 [2024-12-09 18:08:37.873551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.102 [2024-12-09 18:08:37.932397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.102 [2024-12-09 18:08:37.932452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.102 [2024-12-09 18:08:37.932480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.102 [2024-12-09 18:08:37.932491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.102 [2024-12-09 18:08:37.932501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.102 [2024-12-09 18:08:37.933169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.102 [2024-12-09 18:08:38.085404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.102 Malloc0 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.102 [2024-12-09 18:08:38.125438] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1501016 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1501017 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1501018 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1501016 00:19:15.102 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:15.360 [2024-12-09 18:08:38.183935] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:15.360 [2024-12-09 18:08:38.193935] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:15.360 [2024-12-09 18:08:38.194156] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:16.296 Initializing NVMe Controllers 00:19:16.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:16.297 Initialization complete. Launching workers. 00:19:16.297 ======================================================== 00:19:16.297 Latency(us) 00:19:16.297 Device Information : IOPS MiB/s Average min max 00:19:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 27.00 0.11 37864.45 334.78 41030.90 00:19:16.297 ======================================================== 00:19:16.297 Total : 27.00 0.11 37864.45 334.78 41030.90 00:19:16.297 00:19:16.297 Initializing NVMe Controllers 00:19:16.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:16.297 Initialization complete. Launching workers. 
00:19:16.297 ======================================================== 00:19:16.297 Latency(us) 00:19:16.297 Device Information : IOPS MiB/s Average min max 00:19:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4399.98 17.19 226.88 153.54 447.48 00:19:16.297 ======================================================== 00:19:16.297 Total : 4399.98 17.19 226.88 153.54 447.48 00:19:16.297 00:19:16.297 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1501017 00:19:16.555 Initializing NVMe Controllers 00:19:16.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:16.555 Initialization complete. Launching workers. 00:19:16.555 ======================================================== 00:19:16.555 Latency(us) 00:19:16.555 Device Information : IOPS MiB/s Average min max 00:19:16.555 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4265.00 16.66 234.09 155.51 532.28 00:19:16.555 ======================================================== 00:19:16.555 Total : 4265.00 16.66 234.09 155.51 532.28 00:19:16.555 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1501018 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.555 18:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.555 rmmod nvme_tcp 00:19:16.555 rmmod nvme_fabrics 00:19:16.555 rmmod nvme_keyring 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.555 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1500995 ']' 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1500995 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1500995 ']' 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1500995 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500995 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1500995' 00:19:16.556 killing process with pid 1500995 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1500995 00:19:16.556 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1500995 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.816 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:18.724 00:19:18.724 real 0m6.353s 00:19:18.724 user 0m5.431s 
00:19:18.724 sys 0m2.705s 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.724 ************************************ 00:19:18.724 END TEST nvmf_control_msg_list 00:19:18.724 ************************************ 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.724 ************************************ 00:19:18.724 START TEST nvmf_wait_for_buf 00:19:18.724 ************************************ 00:19:18.724 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:18.982 * Looking for test storage... 
00:19:18.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:18.982 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:19:18.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.983 --rc genhtml_branch_coverage=1 00:19:18.983 --rc genhtml_function_coverage=1 00:19:18.983 --rc genhtml_legend=1 00:19:18.983 --rc geninfo_all_blocks=1 00:19:18.983 --rc geninfo_unexecuted_blocks=1 00:19:18.983 00:19:18.983 ' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:18.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.983 --rc genhtml_branch_coverage=1 00:19:18.983 --rc genhtml_function_coverage=1 00:19:18.983 --rc genhtml_legend=1 00:19:18.983 --rc geninfo_all_blocks=1 00:19:18.983 --rc geninfo_unexecuted_blocks=1 00:19:18.983 00:19:18.983 ' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:18.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.983 --rc genhtml_branch_coverage=1 00:19:18.983 --rc genhtml_function_coverage=1 00:19:18.983 --rc genhtml_legend=1 00:19:18.983 --rc geninfo_all_blocks=1 00:19:18.983 --rc geninfo_unexecuted_blocks=1 00:19:18.983 00:19:18.983 ' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:18.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.983 --rc genhtml_branch_coverage=1 00:19:18.983 --rc genhtml_function_coverage=1 00:19:18.983 --rc genhtml_legend=1 00:19:18.983 --rc geninfo_all_blocks=1 00:19:18.983 --rc geninfo_unexecuted_blocks=1 00:19:18.983 00:19:18.983 ' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.983 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.515 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:21.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:21.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:21.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.516 18:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:21.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:21.516 18:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.516 18:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:21.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:19:21.516 00:19:21.516 --- 10.0.0.2 ping statistics --- 00:19:21.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.516 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:19:21.516 00:19:21.516 --- 10.0.0.1 ping statistics --- 00:19:21.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.516 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1503212 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1503212 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1503212 ']' 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.516 [2024-12-09 18:08:44.266861] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:19:21.516 [2024-12-09 18:08:44.266935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.516 [2024-12-09 18:08:44.340249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.516 [2024-12-09 18:08:44.400307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.516 [2024-12-09 18:08:44.400381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:21.516 [2024-12-09 18:08:44.400410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.516 [2024-12-09 18:08:44.400422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.516 [2024-12-09 18:08:44.400432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.516 [2024-12-09 18:08:44.401150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.516 
18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.516 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.775 Malloc0 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.775 [2024-12-09 18:08:44.653654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:21.775 [2024-12-09 18:08:44.677865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
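The four `rpc_cmd` entries above configure the target end-to-end: create the TCP transport, create the subsystem, attach the Malloc0 namespace, and add the 10.0.0.2:4420 listener. A minimal sketch of the same sequence as standalone `scripts/rpc.py` invocations — RPC names and arguments are taken verbatim from the trace; the `rpc` wrapper, its `DRY_RUN` switch, and the default socket path are assumptions for illustration, not the real `rpc_cmd` from `common.sh`:

```shell
# Hypothetical dry-run wrapper; the autotest harness uses rpc_cmd instead.
DRY_RUN=1
rpc() {
  if [[ ${DRY_RUN:-0} -eq 1 ]]; then
    echo "rpc.py $*"                          # show what would be sent
  else
    scripts/rpc.py -s /var/tmp/spdk.sock "$@" # assumed default RPC socket
  fi
}

# Same order as wait_for_buf.sh@23-26 in the trace above.
rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

Note the deliberately tiny buffer counts (`-n 24 -b 24`): the point of this test is to force the transport to wait for iobuf buffers under the perf load that follows.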
00:19:21.775 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:21.775 [2024-12-09 18:08:44.766993] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:23.680 Initializing NVMe Controllers 00:19:23.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:23.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:23.680 Initialization complete. Launching workers. 00:19:23.680 ======================================================== 00:19:23.680 Latency(us) 00:19:23.680 Device Information : IOPS MiB/s Average min max 00:19:23.680 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 46.88 5.86 88801.10 23998.81 151644.29 00:19:23.680 ======================================================== 00:19:23.680 Total : 46.88 5.86 88801.10 23998.81 151644.29 00:19:23.680 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.680 18:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=726 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 726 -eq 0 ]] 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:23.680 rmmod nvme_tcp 00:19:23.680 rmmod nvme_fabrics 00:19:23.680 rmmod nvme_keyring 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1503212 ']' 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1503212 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1503212 ']' 00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1503212 
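The pass/fail decision above hinges on `retry_count=726` not being zero: with the undersized small pool (`--small-pool-count 154`), the perf run must have forced at least one buffer-wait retry. A sketch of that jq extraction in isolation — the JSON here is an invented stand-in for real `iobuf_get_stats` output (module name and field paths are from the trace; the values are made up):

```shell
# Stand-in for `rpc_cmd iobuf_get_stats` output; values are illustrative only.
stats='[{"module":"nvmf_TCP","small_pool":{"retry":726},"large_pool":{"retry":0}}]'

# Same filter as wait_for_buf.sh@32: pick the nvmf_TCP module's
# small-pool retry counter out of the stats array.
retry_count=$(printf '%s' "$stats" \
  | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')

# The test fails if no retries were observed (wait_for_buf.sh@33 inverts this).
if [[ $retry_count -eq 0 ]]; then
  echo "FAIL: no iobuf small-pool retries observed"
else
  echo "OK: $retry_count small-pool retries"
fi
```

In the real run the check passed (726 retries), so the script falls through to `nvmftestfini`, which is the cleanup visible in the log entries that follow.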
00:19:23.680 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1503212 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1503212' 00:19:23.681 killing process with pid 1503212 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1503212 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1503212 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:23.681 18:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.681 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:26.219 00:19:26.219 real 0m6.884s 00:19:26.219 user 0m3.281s 00:19:26.219 sys 0m2.069s 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.219 ************************************ 00:19:26.219 END TEST nvmf_wait_for_buf 00:19:26.219 ************************************ 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.219 18:08:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.122 
18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:28.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.122 18:08:50 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:28.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:28.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.122 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:28.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.123 ************************************ 00:19:28.123 START TEST nvmf_perf_adq 00:19:28.123 ************************************ 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:28.123 * Looking for test storage... 00:19:28.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:28.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.123 --rc genhtml_branch_coverage=1 00:19:28.123 --rc genhtml_function_coverage=1 00:19:28.123 --rc genhtml_legend=1 00:19:28.123 --rc geninfo_all_blocks=1 00:19:28.123 --rc geninfo_unexecuted_blocks=1 00:19:28.123 00:19:28.123 ' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:28.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.123 --rc genhtml_branch_coverage=1 00:19:28.123 --rc genhtml_function_coverage=1 00:19:28.123 --rc genhtml_legend=1 00:19:28.123 --rc geninfo_all_blocks=1 00:19:28.123 --rc geninfo_unexecuted_blocks=1 00:19:28.123 00:19:28.123 ' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:28.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.123 --rc genhtml_branch_coverage=1 00:19:28.123 --rc genhtml_function_coverage=1 00:19:28.123 --rc genhtml_legend=1 00:19:28.123 --rc geninfo_all_blocks=1 00:19:28.123 --rc geninfo_unexecuted_blocks=1 00:19:28.123 00:19:28.123 ' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:28.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.123 --rc genhtml_branch_coverage=1 00:19:28.123 --rc genhtml_function_coverage=1 00:19:28.123 --rc genhtml_legend=1 00:19:28.123 --rc geninfo_all_blocks=1 00:19:28.123 --rc geninfo_unexecuted_blocks=1 00:19:28.123 00:19:28.123 ' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.123 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.124 18:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.124 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:30.658 18:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.658 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:30.658 18:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:30.658 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:30.917 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:33.450 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:38.885 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:38.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:38.886 18:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:38.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:38.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:38.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:38.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:38.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:19:38.886 00:19:38.886 --- 10.0.0.2 ping statistics --- 00:19:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.886 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:19:38.886 00:19:38.886 --- 10.0.0.1 ping statistics --- 00:19:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.886 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.886 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1508038 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1508038 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1508038 ']' 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 [2024-12-09 18:09:01.344930] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:19:38.887 [2024-12-09 18:09:01.345009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.887 [2024-12-09 18:09:01.427725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.887 [2024-12-09 18:09:01.489437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.887 [2024-12-09 18:09:01.489489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.887 [2024-12-09 18:09:01.489504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.887 [2024-12-09 18:09:01.489515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.887 [2024-12-09 18:09:01.489525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
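[editor's note] The target here is started with `-m 0xF` (and `spdk_nvme_perf` later runs with `-c 0xF0`); SPDK/DPDK's EAL parses these hex masks into the core set each process may use. A sketch of the expansion with a hypothetical helper (SPDK does this internally):

```shell
# Hypothetical helper: expand a hex CPU mask into the list of set core indices,
# the way the EAL interprets -m / -c masks.
mask_to_cores() {
    mask=$(( $1 )); bit=0; cores=""
    while [ "$bit" -lt 64 ]; do
        if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then cores="$cores $bit"; fi
        bit=$((bit + 1))
    done
    printf '%s\n' "${cores# }"
}

mask_to_cores 0xF    # target reactors (matches the reactor_run lines: cores 0-3)
mask_to_cores 0xF0   # perf workers (matches "NSID 1 with lcore 4..7" below)
```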
00:19:38.887 [2024-12-09 18:09:01.491173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.887 [2024-12-09 18:09:01.491197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.887 [2024-12-09 18:09:01.491254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.887 [2024-12-09 18:09:01.491257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:38.887 18:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 [2024-12-09 18:09:01.766948] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 Malloc1 00:19:38.887 18:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 [2024-12-09 18:09:01.839990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1508200 00:19:38.887 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:38.887 18:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:41.425 "tick_rate": 2700000000, 00:19:41.425 "poll_groups": [ 00:19:41.425 { 00:19:41.425 "name": "nvmf_tgt_poll_group_000", 00:19:41.425 "admin_qpairs": 1, 00:19:41.425 "io_qpairs": 1, 00:19:41.425 "current_admin_qpairs": 1, 00:19:41.425 "current_io_qpairs": 1, 00:19:41.425 "pending_bdev_io": 0, 00:19:41.425 "completed_nvme_io": 19859, 00:19:41.425 "transports": [ 00:19:41.425 { 00:19:41.425 "trtype": "TCP" 00:19:41.425 } 00:19:41.425 ] 00:19:41.425 }, 00:19:41.425 { 00:19:41.425 "name": "nvmf_tgt_poll_group_001", 00:19:41.425 "admin_qpairs": 0, 00:19:41.425 "io_qpairs": 1, 00:19:41.425 "current_admin_qpairs": 0, 00:19:41.425 "current_io_qpairs": 1, 00:19:41.425 "pending_bdev_io": 0, 00:19:41.425 "completed_nvme_io": 19843, 00:19:41.425 "transports": [ 00:19:41.425 { 00:19:41.425 "trtype": "TCP" 00:19:41.425 } 00:19:41.425 ] 00:19:41.425 }, 00:19:41.425 { 00:19:41.425 "name": "nvmf_tgt_poll_group_002", 00:19:41.425 "admin_qpairs": 0, 00:19:41.425 "io_qpairs": 1, 00:19:41.425 "current_admin_qpairs": 0, 00:19:41.425 "current_io_qpairs": 1, 00:19:41.425 "pending_bdev_io": 0, 00:19:41.425 "completed_nvme_io": 19964, 00:19:41.425 
"transports": [ 00:19:41.425 { 00:19:41.425 "trtype": "TCP" 00:19:41.425 } 00:19:41.425 ] 00:19:41.425 }, 00:19:41.425 { 00:19:41.425 "name": "nvmf_tgt_poll_group_003", 00:19:41.425 "admin_qpairs": 0, 00:19:41.425 "io_qpairs": 1, 00:19:41.425 "current_admin_qpairs": 0, 00:19:41.425 "current_io_qpairs": 1, 00:19:41.425 "pending_bdev_io": 0, 00:19:41.425 "completed_nvme_io": 19679, 00:19:41.425 "transports": [ 00:19:41.425 { 00:19:41.425 "trtype": "TCP" 00:19:41.425 } 00:19:41.425 ] 00:19:41.425 } 00:19:41.425 ] 00:19:41.425 }' 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:41.425 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1508200 00:19:49.539 Initializing NVMe Controllers 00:19:49.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:49.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:49.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:49.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:49.539 Initialization complete. Launching workers. 
00:19:49.539 ======================================================== 00:19:49.539 Latency(us) 00:19:49.539 Device Information : IOPS MiB/s Average min max 00:19:49.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10280.70 40.16 6227.05 2431.71 10230.44 00:19:49.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10484.20 40.95 6103.91 2251.24 10712.09 00:19:49.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10521.90 41.10 6084.04 2512.40 9964.84 00:19:49.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10412.70 40.67 6147.34 2539.80 10690.39 00:19:49.539 ======================================================== 00:19:49.539 Total : 41699.49 162.89 6140.10 2251.24 10712.09 00:19:49.539 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.539 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.539 rmmod nvme_tcp 00:19:49.539 rmmod nvme_fabrics 00:19:49.539 rmmod nvme_keyring 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:49.539 18:09:12 
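[editor's note] The `perf_adq.sh@86` gate above counts poll groups with an active I/O qpair (via `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l`) and fails unless all four carry traffic. That check, and the perf table's Total row, can be re-verified offline; a sketch using the values captured in this log (abridged stats JSON, `grep -c` standing in for the jq filter):

```shell
# Abridged copy of the nvmf_get_stats output captured above, reduced to the
# field the gate actually inspects.
nvmf_stats='{ "poll_groups": [
  { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
  { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
  { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
  { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 } ] }'

# Stand-in for the jq select + wc -l pipeline: count busy poll groups.
count=$(printf '%s\n' "$nvmf_stats" | grep -c '"current_io_qpairs": 1')
if [ "$count" -ne 4 ]; then echo "expected 4 busy poll groups, got $count"; fi

# The Total row is the sum of the per-core IOPS; this prints 41699.50,
# matching the logged Total of 41699.49 up to rounding of the raw values.
awk 'BEGIN { printf "%.2f\n", 10280.70 + 10484.20 + 10521.90 + 10412.70 }'
```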
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1508038 ']' 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1508038 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1508038 ']' 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1508038 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508038 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508038' 00:19:49.539 killing process with pid 1508038 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1508038 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1508038 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:49.539 
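[editor's note] The `iptr` cleanup invoked here (nvmf/common.sh@791) relies on the comment tag attached when the rule was installed earlier in this log (`-m comment --comment 'SPDK_NVMF:…'`): it pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so only the test's own rules are dropped. A no-root sketch of the filtering step on synthetic input:

```shell
# Synthetic iptables-save output: two pre-existing rules plus one rule tagged
# with the SPDK_NVMF comment, as added by the ipts helper earlier in this log.
dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -p icmp -j ACCEPT'

# In the real helper this is: iptables-save | grep -v SPDK_NVMF | iptables-restore
# Only the comment-tagged rule disappears; unrelated rules survive.
printf '%s\n' "$dump" | grep -v SPDK_NVMF
```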
18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.539 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.448 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:51.448 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:51.448 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:51.448 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:52.381 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:54.917 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.194 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.195 18:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:00.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:00.195 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:00.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:00.195 18:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:00.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:20:00.195 00:20:00.195 --- 10.0.0.2 ping statistics --- 00:20:00.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.195 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:00.195 00:20:00.195 --- 10.0.0.1 ping statistics --- 00:20:00.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.195 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:00.195 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:00.196 net.core.busy_poll = 1 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:00.196 net.core.busy_read = 1 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1511339 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
1511339 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1511339 ']' 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.196 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.196 [2024-12-09 18:09:23.034369] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:20:00.196 [2024-12-09 18:09:23.034452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.196 [2024-12-09 18:09:23.110347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.196 [2024-12-09 18:09:23.171157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.196 [2024-12-09 18:09:23.171220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.196 [2024-12-09 18:09:23.171250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.196 [2024-12-09 18:09:23.171261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:00.196 [2024-12-09 18:09:23.171270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.196 [2024-12-09 18:09:23.172965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.196 [2024-12-09 18:09:23.173030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.196 [2024-12-09 18:09:23.173098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.196 [2024-12-09 18:09:23.173101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.454 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.454 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:00.454 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.454 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.454 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.455 [2024-12-09 18:09:23.456019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.455 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.455 18:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.713 Malloc1 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.713 [2024-12-09 18:09:23.529646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1511477 
00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:00.713 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:02.617 "tick_rate": 2700000000, 00:20:02.617 "poll_groups": [ 00:20:02.617 { 00:20:02.617 "name": "nvmf_tgt_poll_group_000", 00:20:02.617 "admin_qpairs": 1, 00:20:02.617 "io_qpairs": 3, 00:20:02.617 "current_admin_qpairs": 1, 00:20:02.617 "current_io_qpairs": 3, 00:20:02.617 "pending_bdev_io": 0, 00:20:02.617 "completed_nvme_io": 26061, 00:20:02.617 "transports": [ 00:20:02.617 { 00:20:02.617 "trtype": "TCP" 00:20:02.617 } 00:20:02.617 ] 00:20:02.617 }, 00:20:02.617 { 00:20:02.617 "name": "nvmf_tgt_poll_group_001", 00:20:02.617 "admin_qpairs": 0, 00:20:02.617 "io_qpairs": 1, 00:20:02.617 "current_admin_qpairs": 0, 00:20:02.617 "current_io_qpairs": 1, 00:20:02.617 "pending_bdev_io": 0, 00:20:02.617 "completed_nvme_io": 24827, 00:20:02.617 "transports": [ 00:20:02.617 { 00:20:02.617 "trtype": "TCP" 00:20:02.617 } 00:20:02.617 ] 00:20:02.617 }, 00:20:02.617 { 00:20:02.617 "name": "nvmf_tgt_poll_group_002", 00:20:02.617 "admin_qpairs": 0, 00:20:02.617 "io_qpairs": 0, 00:20:02.617 "current_admin_qpairs": 0, 
00:20:02.617 "current_io_qpairs": 0, 00:20:02.617 "pending_bdev_io": 0, 00:20:02.617 "completed_nvme_io": 0, 00:20:02.617 "transports": [ 00:20:02.617 { 00:20:02.617 "trtype": "TCP" 00:20:02.617 } 00:20:02.617 ] 00:20:02.617 }, 00:20:02.617 { 00:20:02.617 "name": "nvmf_tgt_poll_group_003", 00:20:02.617 "admin_qpairs": 0, 00:20:02.617 "io_qpairs": 0, 00:20:02.617 "current_admin_qpairs": 0, 00:20:02.617 "current_io_qpairs": 0, 00:20:02.617 "pending_bdev_io": 0, 00:20:02.617 "completed_nvme_io": 0, 00:20:02.617 "transports": [ 00:20:02.617 { 00:20:02.617 "trtype": "TCP" 00:20:02.617 } 00:20:02.617 ] 00:20:02.617 } 00:20:02.617 ] 00:20:02.617 }' 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:02.617 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1511477 00:20:10.733 Initializing NVMe Controllers 00:20:10.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:10.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:10.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:10.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:10.733 Initialization complete. Launching workers. 
00:20:10.733 ======================================================== 00:20:10.733 Latency(us) 00:20:10.733 Device Information : IOPS MiB/s Average min max 00:20:10.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3858.80 15.07 16590.01 1686.84 62117.67 00:20:10.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13211.90 51.61 4844.95 1488.21 47368.00 00:20:10.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5378.10 21.01 11941.51 1811.46 58557.84 00:20:10.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4522.60 17.67 14173.40 1654.89 61991.98 00:20:10.733 ======================================================== 00:20:10.733 Total : 26971.39 105.36 9504.58 1488.21 62117.67 00:20:10.733 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.733 rmmod nvme_tcp 00:20:10.733 rmmod nvme_fabrics 00:20:10.733 rmmod nvme_keyring 00:20:10.733 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:10.991 18:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1511339 ']' 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1511339 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1511339 ']' 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1511339 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1511339 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1511339' 00:20:10.991 killing process with pid 1511339 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1511339 00:20:10.991 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1511339 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:11.248 
18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.248 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:14.535 00:20:14.535 real 0m46.269s 00:20:14.535 user 2m40.124s 00:20:14.535 sys 0m9.713s 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.535 ************************************ 00:20:14.535 END TEST nvmf_perf_adq 00:20:14.535 ************************************ 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.535 ************************************ 00:20:14.535 START TEST nvmf_shutdown 00:20:14.535 ************************************ 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:14.535 * Looking for test storage... 00:20:14.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.535 18:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:14.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.535 --rc genhtml_branch_coverage=1 00:20:14.535 --rc genhtml_function_coverage=1 00:20:14.535 --rc genhtml_legend=1 00:20:14.535 --rc geninfo_all_blocks=1 00:20:14.535 --rc geninfo_unexecuted_blocks=1 00:20:14.535 00:20:14.535 ' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:14.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.535 --rc genhtml_branch_coverage=1 00:20:14.535 --rc genhtml_function_coverage=1 00:20:14.535 --rc genhtml_legend=1 00:20:14.535 --rc geninfo_all_blocks=1 00:20:14.535 --rc geninfo_unexecuted_blocks=1 00:20:14.535 00:20:14.535 ' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:14.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.535 --rc genhtml_branch_coverage=1 00:20:14.535 --rc genhtml_function_coverage=1 00:20:14.535 --rc genhtml_legend=1 00:20:14.535 --rc geninfo_all_blocks=1 00:20:14.535 --rc geninfo_unexecuted_blocks=1 00:20:14.535 00:20:14.535 ' 00:20:14.535 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:14.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.536 --rc genhtml_branch_coverage=1 00:20:14.536 --rc genhtml_function_coverage=1 00:20:14.536 --rc genhtml_legend=1 00:20:14.536 --rc geninfo_all_blocks=1 00:20:14.536 --rc geninfo_unexecuted_blocks=1 00:20:14.536 00:20:14.536 ' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:14.536 ************************************ 00:20:14.536 START TEST nvmf_shutdown_tc1 00:20:14.536 ************************************ 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.536 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:16.442 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.442 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:16.442 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.442 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:16.442 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:16.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:16.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:16.442 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:16.443 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.443 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:16.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:20:16.701 00:20:16.701 --- 10.0.0.2 ping statistics --- 00:20:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.701 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:16.701 00:20:16.701 --- 10.0.0.1 ping statistics --- 00:20:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.701 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1514769 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1514769 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1514769 ']' 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:16.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.701 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.701 [2024-12-09 18:09:39.618770] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:20:16.701 [2024-12-09 18:09:39.618855] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.701 [2024-12-09 18:09:39.691876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.961 [2024-12-09 18:09:39.748145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.961 [2024-12-09 18:09:39.748201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.961 [2024-12-09 18:09:39.748229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.961 [2024-12-09 18:09:39.748240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.961 [2024-12-09 18:09:39.748249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.961 [2024-12-09 18:09:39.750068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.961 [2024-12-09 18:09:39.750130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.961 [2024-12-09 18:09:39.750193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:16.961 [2024-12-09 18:09:39.750197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.961 [2024-12-09 18:09:39.899677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.961 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.961 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.962 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.962 Malloc1 00:20:17.220 [2024-12-09 18:09:40.003990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.220 Malloc2 00:20:17.220 Malloc3 00:20:17.220 Malloc4 00:20:17.220 Malloc5 00:20:17.220 Malloc6 00:20:17.479 Malloc7 00:20:17.479 Malloc8 00:20:17.479 Malloc9 
00:20:17.479 Malloc10 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1514947 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1514947 /var/tmp/bdevperf.sock 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1514947 ']' 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:17.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": ${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": ${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": ${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": 
${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": ${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": ${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 
00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.479 "trtype": "$TEST_TRANSPORT", 00:20:17.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.479 "adrfam": "ipv4", 00:20:17.479 "trsvcid": "$NVMF_PORT", 00:20:17.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.479 "hdgst": ${hdgst:-false}, 00:20:17.479 "ddgst": ${ddgst:-false} 00:20:17.479 }, 00:20:17.479 "method": "bdev_nvme_attach_controller" 00:20:17.479 } 00:20:17.479 EOF 00:20:17.479 )") 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.479 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.479 { 00:20:17.479 "params": { 00:20:17.479 "name": "Nvme$subsystem", 00:20:17.480 "trtype": "$TEST_TRANSPORT", 00:20:17.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.480 "adrfam": "ipv4", 00:20:17.480 "trsvcid": "$NVMF_PORT", 00:20:17.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.480 "hdgst": ${hdgst:-false}, 00:20:17.480 "ddgst": ${ddgst:-false} 00:20:17.480 }, 00:20:17.480 "method": "bdev_nvme_attach_controller" 00:20:17.480 } 00:20:17.480 EOF 00:20:17.480 )") 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.480 { 00:20:17.480 "params": { 00:20:17.480 "name": "Nvme$subsystem", 00:20:17.480 "trtype": "$TEST_TRANSPORT", 00:20:17.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.480 "adrfam": "ipv4", 00:20:17.480 "trsvcid": "$NVMF_PORT", 00:20:17.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.480 "hdgst": ${hdgst:-false}, 00:20:17.480 "ddgst": ${ddgst:-false} 00:20:17.480 }, 00:20:17.480 "method": "bdev_nvme_attach_controller" 00:20:17.480 } 00:20:17.480 EOF 00:20:17.480 )") 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.480 { 00:20:17.480 "params": { 00:20:17.480 "name": "Nvme$subsystem", 00:20:17.480 "trtype": "$TEST_TRANSPORT", 00:20:17.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.480 "adrfam": "ipv4", 00:20:17.480 "trsvcid": "$NVMF_PORT", 00:20:17.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.480 "hdgst": ${hdgst:-false}, 00:20:17.480 "ddgst": ${ddgst:-false} 00:20:17.480 }, 00:20:17.480 "method": "bdev_nvme_attach_controller" 00:20:17.480 } 00:20:17.480 EOF 00:20:17.480 )") 00:20:17.480 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:17.739 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:20:17.739 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:17.739 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:17.739 "params": { 00:20:17.739 "name": "Nvme1", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 00:20:17.739 "params": { 00:20:17.739 "name": "Nvme2", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 00:20:17.739 "params": { 00:20:17.739 "name": "Nvme3", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 00:20:17.739 "params": { 00:20:17.739 "name": "Nvme4", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 
00:20:17.739 "params": { 00:20:17.739 "name": "Nvme5", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 00:20:17.739 "params": { 00:20:17.739 "name": "Nvme6", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 00:20:17.739 "params": { 00:20:17.739 "name": "Nvme7", 00:20:17.739 "trtype": "tcp", 00:20:17.739 "traddr": "10.0.0.2", 00:20:17.739 "adrfam": "ipv4", 00:20:17.739 "trsvcid": "4420", 00:20:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:17.739 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:17.739 "hdgst": false, 00:20:17.739 "ddgst": false 00:20:17.739 }, 00:20:17.739 "method": "bdev_nvme_attach_controller" 00:20:17.739 },{ 00:20:17.739 "params": { 00:20:17.740 "name": "Nvme8", 00:20:17.740 "trtype": "tcp", 00:20:17.740 "traddr": "10.0.0.2", 00:20:17.740 "adrfam": "ipv4", 00:20:17.740 "trsvcid": "4420", 00:20:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:17.740 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:17.740 "hdgst": false, 00:20:17.740 "ddgst": false 00:20:17.740 }, 00:20:17.740 "method": "bdev_nvme_attach_controller" 00:20:17.740 },{ 00:20:17.740 "params": { 00:20:17.740 "name": "Nvme9", 00:20:17.740 "trtype": "tcp", 00:20:17.740 "traddr": "10.0.0.2", 00:20:17.740 "adrfam": "ipv4", 00:20:17.740 "trsvcid": "4420", 00:20:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:17.740 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:17.740 "hdgst": false, 00:20:17.740 "ddgst": false 00:20:17.740 }, 00:20:17.740 "method": "bdev_nvme_attach_controller" 00:20:17.740 },{ 00:20:17.740 "params": { 00:20:17.740 "name": "Nvme10", 00:20:17.740 "trtype": "tcp", 00:20:17.740 "traddr": "10.0.0.2", 00:20:17.740 "adrfam": "ipv4", 00:20:17.740 "trsvcid": "4420", 00:20:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:17.740 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:17.740 "hdgst": false, 00:20:17.740 "ddgst": false 00:20:17.740 }, 00:20:17.740 "method": "bdev_nvme_attach_controller" 00:20:17.740 }' 00:20:17.740 [2024-12-09 18:09:40.531488] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:20:17.740 [2024-12-09 18:09:40.531591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:17.740 [2024-12-09 18:09:40.604405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.740 [2024-12-09 18:09:40.664718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.644 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1514947 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:19.645 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:20.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1514947 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1514769 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.580 { 00:20:20.580 "params": { 00:20:20.580 "name": "Nvme$subsystem", 00:20:20.580 "trtype": "$TEST_TRANSPORT", 00:20:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.580 "adrfam": "ipv4", 00:20:20.580 "trsvcid": "$NVMF_PORT", 00:20:20.580 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.580 "hdgst": ${hdgst:-false}, 00:20:20.580 "ddgst": ${ddgst:-false} 00:20:20.580 }, 00:20:20.580 "method": "bdev_nvme_attach_controller" 00:20:20.580 } 00:20:20.580 EOF 00:20:20.580 )") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.580 { 00:20:20.580 "params": { 00:20:20.580 "name": "Nvme$subsystem", 00:20:20.580 "trtype": "$TEST_TRANSPORT", 00:20:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.580 "adrfam": "ipv4", 00:20:20.580 "trsvcid": "$NVMF_PORT", 00:20:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.580 "hdgst": ${hdgst:-false}, 00:20:20.580 "ddgst": ${ddgst:-false} 00:20:20.580 }, 00:20:20.580 "method": "bdev_nvme_attach_controller" 00:20:20.580 } 00:20:20.580 EOF 00:20:20.580 )") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.580 { 00:20:20.580 "params": { 00:20:20.580 "name": "Nvme$subsystem", 00:20:20.580 "trtype": "$TEST_TRANSPORT", 00:20:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.580 "adrfam": "ipv4", 00:20:20.580 "trsvcid": "$NVMF_PORT", 00:20:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.580 "hdgst": 
${hdgst:-false}, 00:20:20.580 "ddgst": ${ddgst:-false} 00:20:20.580 }, 00:20:20.580 "method": "bdev_nvme_attach_controller" 00:20:20.580 } 00:20:20.580 EOF 00:20:20.580 )") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.580 { 00:20:20.580 "params": { 00:20:20.580 "name": "Nvme$subsystem", 00:20:20.580 "trtype": "$TEST_TRANSPORT", 00:20:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.580 "adrfam": "ipv4", 00:20:20.580 "trsvcid": "$NVMF_PORT", 00:20:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.580 "hdgst": ${hdgst:-false}, 00:20:20.580 "ddgst": ${ddgst:-false} 00:20:20.580 }, 00:20:20.580 "method": "bdev_nvme_attach_controller" 00:20:20.580 } 00:20:20.580 EOF 00:20:20.580 )") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.580 { 00:20:20.580 "params": { 00:20:20.580 "name": "Nvme$subsystem", 00:20:20.580 "trtype": "$TEST_TRANSPORT", 00:20:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.580 "adrfam": "ipv4", 00:20:20.580 "trsvcid": "$NVMF_PORT", 00:20:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.580 "hdgst": ${hdgst:-false}, 00:20:20.580 "ddgst": ${ddgst:-false} 00:20:20.580 }, 00:20:20.580 "method": "bdev_nvme_attach_controller" 
00:20:20.580 } 00:20:20.580 EOF 00:20:20.580 )") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.580 { 00:20:20.580 "params": { 00:20:20.580 "name": "Nvme$subsystem", 00:20:20.580 "trtype": "$TEST_TRANSPORT", 00:20:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.580 "adrfam": "ipv4", 00:20:20.580 "trsvcid": "$NVMF_PORT", 00:20:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.580 "hdgst": ${hdgst:-false}, 00:20:20.580 "ddgst": ${ddgst:-false} 00:20:20.580 }, 00:20:20.580 "method": "bdev_nvme_attach_controller" 00:20:20.580 } 00:20:20.580 EOF 00:20:20.580 )") 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.580 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.581 { 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme$subsystem", 00:20:20.581 "trtype": "$TEST_TRANSPORT", 00:20:20.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "$NVMF_PORT", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.581 "hdgst": ${hdgst:-false}, 00:20:20.581 "ddgst": ${ddgst:-false} 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 } 00:20:20.581 EOF 00:20:20.581 )") 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.581 { 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme$subsystem", 00:20:20.581 "trtype": "$TEST_TRANSPORT", 00:20:20.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "$NVMF_PORT", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.581 "hdgst": ${hdgst:-false}, 00:20:20.581 "ddgst": ${ddgst:-false} 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 } 00:20:20.581 EOF 00:20:20.581 )") 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.581 { 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme$subsystem", 00:20:20.581 "trtype": "$TEST_TRANSPORT", 00:20:20.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "$NVMF_PORT", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.581 "hdgst": ${hdgst:-false}, 00:20:20.581 "ddgst": ${ddgst:-false} 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 } 00:20:20.581 EOF 00:20:20.581 )") 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.581 { 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme$subsystem", 00:20:20.581 "trtype": "$TEST_TRANSPORT", 00:20:20.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "$NVMF_PORT", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.581 "hdgst": ${hdgst:-false}, 00:20:20.581 "ddgst": ${ddgst:-false} 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 } 00:20:20.581 EOF 00:20:20.581 )") 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:20.581 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme1", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme2", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 
00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme3", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme4", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme5", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme6", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme7", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme8", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme9", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 },{ 00:20:20.581 "params": { 00:20:20.581 "name": "Nvme10", 00:20:20.581 "trtype": "tcp", 00:20:20.581 "traddr": "10.0.0.2", 00:20:20.581 "adrfam": "ipv4", 00:20:20.581 "trsvcid": "4420", 00:20:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:20.581 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:20.581 "hdgst": false, 00:20:20.581 "ddgst": false 00:20:20.581 }, 00:20:20.581 "method": "bdev_nvme_attach_controller" 00:20:20.581 }' 00:20:20.581 [2024-12-09 18:09:43.608243] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:20:20.581 [2024-12-09 18:09:43.608328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515254 ]
00:20:20.841 [2024-12-09 18:09:43.684728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:20.841 [2024-12-09 18:09:43.745155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:22.217 Running I/O for 1 seconds...
00:20:23.592 1809.00 IOPS, 113.06 MiB/s
00:20:23.592 Latency(us)
00:20:23.592 [2024-12-09T17:09:46.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.592 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme1n1 : 1.09 234.85 14.68 0.00 0.00 269327.74 23398.78 236123.78
00:20:23.592 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme2n1 : 1.12 233.43 14.59 0.00 0.00 262283.68 12087.75 242337.56
00:20:23.592 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme3n1 : 1.08 241.32 15.08 0.00 0.00 247421.55 19223.89 245444.46
00:20:23.592 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme4n1 : 1.08 237.06 14.82 0.00 0.00 253604.03 20486.07 248551.35
00:20:23.592 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme5n1 : 1.14 224.67 14.04 0.00 0.00 262581.85 21262.79 257872.02
00:20:23.592 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme6n1 : 1.13 229.09 14.32 0.00 0.00 253089.76 6456.51 260978.92
00:20:23.592 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme7n1 : 1.19 268.54 16.78 0.00 0.00 213439.91 11116.85 253211.69
00:20:23.592 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme8n1 : 1.13 226.05 14.13 0.00 0.00 248726.76 20777.34 250104.79
00:20:23.592 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme9n1 : 1.21 264.07 16.50 0.00 0.00 211301.49 11019.76 268746.15
00:20:23.592 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.592 Verification LBA range: start 0x0 length 0x400
00:20:23.592 Nvme10n1 : 1.21 264.77 16.55 0.00 0.00 207137.07 8349.77 281173.71
00:20:23.592 [2024-12-09T17:09:46.633Z] ===================================================================================================================
00:20:23.592 [2024-12-09T17:09:46.633Z] Total : 2423.84 151.49 0.00 0.00 240707.28 6456.51 281173.71
00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1
-- target/shutdown.sh@46 -- # nvmftestfini 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.853 rmmod nvme_tcp 00:20:23.853 rmmod nvme_fabrics 00:20:23.853 rmmod nvme_keyring 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1514769 ']' 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1514769 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1514769 ']' 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1514769 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514769 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514769' 00:20:23.853 killing process with pid 1514769 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1514769 00:20:23.853 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1514769 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:24.423 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:24.424 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:20:24.424 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:24.424 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.424 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.424 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:26.326 00:20:26.326 real 0m11.980s 00:20:26.326 user 0m34.962s 00:20:26.326 sys 0m3.217s 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.326 ************************************ 00:20:26.326 END TEST nvmf_shutdown_tc1 00:20:26.326 ************************************ 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:26.326 ************************************ 00:20:26.326 START TEST nvmf_shutdown_tc2 00:20:26.326 ************************************ 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:26.326 18:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.326 18:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.326 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:26.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:26.327 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:26.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.327 18:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.327 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.585 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.585 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.585 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:26.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:26.586 00:20:26.586 --- 10.0.0.2 ping statistics --- 00:20:26.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.586 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:20:26.586 00:20:26.586 --- 10.0.0.1 ping statistics --- 00:20:26.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.586 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.586 
18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1516138 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1516138 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1516138 ']' 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.586 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.586 [2024-12-09 18:09:49.564101] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:20:26.586 [2024-12-09 18:09:49.564180] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.844 [2024-12-09 18:09:49.637798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.844 [2024-12-09 18:09:49.695297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.844 [2024-12-09 18:09:49.695348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.844 [2024-12-09 18:09:49.695378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.844 [2024-12-09 18:09:49.695389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.845 [2024-12-09 18:09:49.695400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.845 [2024-12-09 18:09:49.696876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.845 [2024-12-09 18:09:49.696943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.845 [2024-12-09 18:09:49.697007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.845 [2024-12-09 18:09:49.697011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.845 [2024-12-09 18:09:49.849212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.845 18:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.845 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.105 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.105 Malloc1 00:20:27.105 [2024-12-09 18:09:49.952237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.105 Malloc2 00:20:27.105 Malloc3 00:20:27.105 Malloc4 00:20:27.105 Malloc5 00:20:27.365 Malloc6 00:20:27.365 Malloc7 00:20:27.365 Malloc8 00:20:27.365 Malloc9 
00:20:27.365 Malloc10 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1516215 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1516215 /var/tmp/bdevperf.sock 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1516215 ']' 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:27.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 "adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.624 "ddgst": ${ddgst:-false} 00:20:27.624 }, 00:20:27.624 "method": "bdev_nvme_attach_controller" 00:20:27.624 } 00:20:27.624 EOF 00:20:27.624 )") 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 
"adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.624 "ddgst": ${ddgst:-false} 00:20:27.624 }, 00:20:27.624 "method": "bdev_nvme_attach_controller" 00:20:27.624 } 00:20:27.624 EOF 00:20:27.624 )") 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 "adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.624 "ddgst": ${ddgst:-false} 00:20:27.624 }, 00:20:27.624 "method": "bdev_nvme_attach_controller" 00:20:27.624 } 00:20:27.624 EOF 00:20:27.624 )") 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 "adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.624 "ddgst": ${ddgst:-false} 00:20:27.624 }, 00:20:27.624 "method": "bdev_nvme_attach_controller" 00:20:27.624 } 00:20:27.624 EOF 00:20:27.624 )") 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 "adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.624 "ddgst": ${ddgst:-false} 00:20:27.624 }, 00:20:27.624 "method": "bdev_nvme_attach_controller" 00:20:27.624 } 00:20:27.624 EOF 00:20:27.624 )") 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 "adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.624 "ddgst": 
${ddgst:-false} 00:20:27.624 }, 00:20:27.624 "method": "bdev_nvme_attach_controller" 00:20:27.624 } 00:20:27.624 EOF 00:20:27.624 )") 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.624 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.624 { 00:20:27.624 "params": { 00:20:27.624 "name": "Nvme$subsystem", 00:20:27.624 "trtype": "$TEST_TRANSPORT", 00:20:27.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.624 "adrfam": "ipv4", 00:20:27.624 "trsvcid": "$NVMF_PORT", 00:20:27.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.624 "hdgst": ${hdgst:-false}, 00:20:27.625 "ddgst": ${ddgst:-false} 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 } 00:20:27.625 EOF 00:20:27.625 )") 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.625 { 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme$subsystem", 00:20:27.625 "trtype": "$TEST_TRANSPORT", 00:20:27.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "$NVMF_PORT", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.625 "hdgst": ${hdgst:-false}, 00:20:27.625 "ddgst": ${ddgst:-false} 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 } 00:20:27.625 EOF 00:20:27.625 
)") 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.625 { 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme$subsystem", 00:20:27.625 "trtype": "$TEST_TRANSPORT", 00:20:27.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "$NVMF_PORT", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.625 "hdgst": ${hdgst:-false}, 00:20:27.625 "ddgst": ${ddgst:-false} 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 } 00:20:27.625 EOF 00:20:27.625 )") 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.625 { 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme$subsystem", 00:20:27.625 "trtype": "$TEST_TRANSPORT", 00:20:27.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "$NVMF_PORT", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.625 "hdgst": ${hdgst:-false}, 00:20:27.625 "ddgst": ${ddgst:-false} 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 } 00:20:27.625 EOF 00:20:27.625 )") 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:27.625 
18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:27.625 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme1", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme2", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme3", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme4", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 
00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme5", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme6", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme7", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme8", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme9", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 },{ 00:20:27.625 "params": { 00:20:27.625 "name": "Nvme10", 00:20:27.625 "trtype": "tcp", 00:20:27.625 "traddr": "10.0.0.2", 00:20:27.625 "adrfam": "ipv4", 00:20:27.625 "trsvcid": "4420", 00:20:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:27.625 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:27.625 "hdgst": false, 00:20:27.625 "ddgst": false 00:20:27.625 }, 00:20:27.625 "method": "bdev_nvme_attach_controller" 00:20:27.625 }' 00:20:27.625 [2024-12-09 18:09:50.481634] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:20:27.625 [2024-12-09 18:09:50.481718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516215 ] 00:20:27.625 [2024-12-09 18:09:50.559151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.625 [2024-12-09 18:09:50.618821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.529 Running I/O for 10 seconds... 
00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.788 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:29.788 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:29.788 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1516215 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1516215 ']' 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1516215 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1516215 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1516215' 00:20:30.046 killing process with pid 1516215 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1516215 00:20:30.046 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1516215 00:20:30.046 
Received shutdown signal, test time was about 0.878769 seconds 00:20:30.046 00:20:30.046 Latency(us) 00:20:30.046 [2024-12-09T17:09:53.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.046 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.046 Verification LBA range: start 0x0 length 0x400 00:20:30.046 Nvme1n1 : 0.84 227.42 14.21 0.00 0.00 277459.50 33399.09 234570.33 00:20:30.046 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.046 Verification LBA range: start 0x0 length 0x400 00:20:30.046 Nvme2n1 : 0.86 246.06 15.38 0.00 0.00 246098.78 16408.27 251658.24 00:20:30.047 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme3n1 : 0.88 291.58 18.22 0.00 0.00 207569.54 20680.25 240784.12 00:20:30.047 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme4n1 : 0.83 230.93 14.43 0.00 0.00 255188.89 23690.05 253211.69 00:20:30.047 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme5n1 : 0.85 225.32 14.08 0.00 0.00 256052.84 23884.23 254765.13 00:20:30.047 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme6n1 : 0.87 221.79 13.86 0.00 0.00 254629.61 19806.44 256318.58 00:20:30.047 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme7n1 : 0.86 223.08 13.94 0.00 0.00 246898.03 20291.89 240784.12 00:20:30.047 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme8n1 : 0.85 226.57 14.16 0.00 0.00 
236451.71 17476.27 251658.24 00:20:30.047 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme9n1 : 0.87 220.54 13.78 0.00 0.00 238284.55 23981.32 270299.59 00:20:30.047 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.047 Verification LBA range: start 0x0 length 0x400 00:20:30.047 Nvme10n1 : 0.87 219.65 13.73 0.00 0.00 233483.12 20874.43 281173.71 00:20:30.047 [2024-12-09T17:09:53.088Z] =================================================================================================================== 00:20:30.047 [2024-12-09T17:09:53.088Z] Total : 2332.94 145.81 0.00 0.00 244017.33 16408.27 281173.71 00:20:30.305 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1516138 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:31.248 rmmod nvme_tcp 00:20:31.248 rmmod nvme_fabrics 00:20:31.248 rmmod nvme_keyring 00:20:31.248 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1516138 ']' 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1516138 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1516138 ']' 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1516138 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.249 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1516138 00:20:31.507 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:31.507 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:31.507 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1516138' 00:20:31.507 killing process with pid 1516138 00:20:31.507 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1516138 00:20:31.507 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1516138 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.765 18:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.765 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.303 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:34.303 00:20:34.304 real 0m7.471s 00:20:34.304 user 0m22.586s 00:20:34.304 sys 0m1.439s 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:34.304 ************************************ 00:20:34.304 END TEST nvmf_shutdown_tc2 00:20:34.304 ************************************ 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:34.304 ************************************ 00:20:34.304 START TEST nvmf_shutdown_tc3 00:20:34.304 ************************************ 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.304 
18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.304 18:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:34.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:34.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.304 18:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:34.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.304 18:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:34.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:34.304 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.305 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:20:34.305 00:20:34.305 --- 10.0.0.2 ping statistics --- 00:20:34.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.305 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:20:34.305 00:20:34.305 --- 10.0.0.1 ping statistics --- 00:20:34.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.305 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.305 
18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1517113 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1517113 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1517113 ']' 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.305 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.305 [2024-12-09 18:09:57.103926] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:20:34.305 [2024-12-09 18:09:57.104015] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.305 [2024-12-09 18:09:57.175307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.305 [2024-12-09 18:09:57.229885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.305 [2024-12-09 18:09:57.229946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.305 [2024-12-09 18:09:57.229973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.305 [2024-12-09 18:09:57.229991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.305 [2024-12-09 18:09:57.230002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.305 [2024-12-09 18:09:57.231426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.305 [2024-12-09 18:09:57.231489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.305 [2024-12-09 18:09:57.231567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:34.305 [2024-12-09 18:09:57.231571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.565 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.565 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:34.565 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.565 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.565 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.566 [2024-12-09 18:09:57.382938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.566 18:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.566 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:34.566 Malloc1 00:20:34.566 [2024-12-09 18:09:57.481895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.566 Malloc2 00:20:34.566 Malloc3 00:20:34.827 Malloc4 00:20:34.827 Malloc5 00:20:34.827 Malloc6 00:20:34.827 Malloc7 00:20:34.827 Malloc8 00:20:34.827 Malloc9 
00:20:35.090 Malloc10 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1517290 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1517290 /var/tmp/bdevperf.sock 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1517290 ']' 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:35.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.090 { 00:20:35.090 "params": { 00:20:35.090 "name": "Nvme$subsystem", 00:20:35.090 "trtype": "$TEST_TRANSPORT", 00:20:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.090 "adrfam": "ipv4", 00:20:35.090 "trsvcid": "$NVMF_PORT", 00:20:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.090 "hdgst": ${hdgst:-false}, 00:20:35.090 "ddgst": ${ddgst:-false} 00:20:35.090 }, 00:20:35.090 "method": "bdev_nvme_attach_controller" 00:20:35.090 } 00:20:35.090 EOF 00:20:35.090 )") 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.090 { 00:20:35.090 "params": { 00:20:35.090 "name": "Nvme$subsystem", 00:20:35.090 "trtype": "$TEST_TRANSPORT", 00:20:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.090 "adrfam": "ipv4", 00:20:35.090 "trsvcid": "$NVMF_PORT", 00:20:35.090 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.090 "hdgst": ${hdgst:-false}, 00:20:35.090 "ddgst": ${ddgst:-false} 00:20:35.090 }, 00:20:35.090 "method": "bdev_nvme_attach_controller" 00:20:35.090 } 00:20:35.090 EOF 00:20:35.090 )") 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.090 { 00:20:35.090 "params": { 00:20:35.090 "name": "Nvme$subsystem", 00:20:35.090 "trtype": "$TEST_TRANSPORT", 00:20:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.090 "adrfam": "ipv4", 00:20:35.090 "trsvcid": "$NVMF_PORT", 00:20:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.090 "hdgst": ${hdgst:-false}, 00:20:35.090 "ddgst": ${ddgst:-false} 00:20:35.090 }, 00:20:35.090 "method": "bdev_nvme_attach_controller" 00:20:35.090 } 00:20:35.090 EOF 00:20:35.090 )") 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.090 { 00:20:35.090 "params": { 00:20:35.090 "name": "Nvme$subsystem", 00:20:35.090 "trtype": "$TEST_TRANSPORT", 00:20:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.090 "adrfam": "ipv4", 00:20:35.090 "trsvcid": "$NVMF_PORT", 00:20:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.090 "hdgst": 
${hdgst:-false}, 00:20:35.090 "ddgst": ${ddgst:-false} 00:20:35.090 }, 00:20:35.090 "method": "bdev_nvme_attach_controller" 00:20:35.090 } 00:20:35.090 EOF 00:20:35.090 )") 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.090 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.090 { 00:20:35.090 "params": { 00:20:35.090 "name": "Nvme$subsystem", 00:20:35.090 "trtype": "$TEST_TRANSPORT", 00:20:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.090 "adrfam": "ipv4", 00:20:35.090 "trsvcid": "$NVMF_PORT", 00:20:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.090 "hdgst": ${hdgst:-false}, 00:20:35.091 "ddgst": ${ddgst:-false} 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 } 00:20:35.091 EOF 00:20:35.091 )") 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.091 { 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme$subsystem", 00:20:35.091 "trtype": "$TEST_TRANSPORT", 00:20:35.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "$NVMF_PORT", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.091 "hdgst": ${hdgst:-false}, 00:20:35.091 "ddgst": ${ddgst:-false} 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 
00:20:35.091 } 00:20:35.091 EOF 00:20:35.091 )") 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.091 { 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme$subsystem", 00:20:35.091 "trtype": "$TEST_TRANSPORT", 00:20:35.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "$NVMF_PORT", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.091 "hdgst": ${hdgst:-false}, 00:20:35.091 "ddgst": ${ddgst:-false} 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 } 00:20:35.091 EOF 00:20:35.091 )") 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.091 { 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme$subsystem", 00:20:35.091 "trtype": "$TEST_TRANSPORT", 00:20:35.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "$NVMF_PORT", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.091 "hdgst": ${hdgst:-false}, 00:20:35.091 "ddgst": ${ddgst:-false} 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 } 00:20:35.091 EOF 00:20:35.091 )") 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.091 { 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme$subsystem", 00:20:35.091 "trtype": "$TEST_TRANSPORT", 00:20:35.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "$NVMF_PORT", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.091 "hdgst": ${hdgst:-false}, 00:20:35.091 "ddgst": ${ddgst:-false} 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 } 00:20:35.091 EOF 00:20:35.091 )") 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.091 { 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme$subsystem", 00:20:35.091 "trtype": "$TEST_TRANSPORT", 00:20:35.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "$NVMF_PORT", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.091 "hdgst": ${hdgst:-false}, 00:20:35.091 "ddgst": ${ddgst:-false} 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 } 00:20:35.091 EOF 00:20:35.091 )") 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:35.091 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme1", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme2", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme3", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme4", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 
00:20:35.091 "params": { 00:20:35.091 "name": "Nvme5", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme6", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme7", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme8", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme9", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:35.091 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 },{ 00:20:35.091 "params": { 00:20:35.091 "name": "Nvme10", 00:20:35.091 "trtype": "tcp", 00:20:35.091 "traddr": "10.0.0.2", 00:20:35.091 "adrfam": "ipv4", 00:20:35.091 "trsvcid": "4420", 00:20:35.091 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:35.091 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:35.091 "hdgst": false, 00:20:35.091 "ddgst": false 00:20:35.091 }, 00:20:35.091 "method": "bdev_nvme_attach_controller" 00:20:35.091 }' 00:20:35.091 [2024-12-09 18:09:58.002041] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:20:35.091 [2024-12-09 18:09:58.002119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517290 ] 00:20:35.092 [2024-12-09 18:09:58.074518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.350 [2024-12-09 18:09:58.134647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.261 Running I/O for 10 seconds... 
00:20:37.261 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.261 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:37.261 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:37.261 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.261 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:37.546 18:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:37.546 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:37.822 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:38.114 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:38.114 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:38.114 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.114 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.114 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.114 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:38.114 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:38.115 18:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1517113 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1517113 ']' 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1517113 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517113 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517113' 00:20:38.115 killing process with pid 1517113 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1517113 00:20:38.115 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1517113 00:20:38.115 [2024-12-09 18:10:01.085491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bcd30 is same with the state(6) to be set 00:20:38.115 [identical tcp.c:1790 *ERROR* message for tqpair=0x18bcd30 repeated at timestamps 18:10:01.085692 through 18:10:01.087071] 00:20:38.115 [2024-12-09 18:10:01.088693]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.115 [2024-12-09 18:10:01.088800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088866] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.088996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089008] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089154] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089300] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089441] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.089464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164de70 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090931] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.090991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091118] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.116 [2024-12-09 18:10:01.091142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 
18:10:01.091243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15456d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 
[2024-12-09 18:10:01.091339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 
[2024-12-09 18:10:01.091455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.117 [2024-12-09 18:10:01.091505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.117 [2024-12-09 18:10:01.091517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f6310 is same with the state(6) to be set 00:20:38.117 [2024-12-09 
18:10:01.091531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.091696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd200 is same with the state(6) to be set 00:20:38.117 
[2024-12-09 18:10:01.094870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.094997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.095009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.095022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.117 [2024-12-09 18:10:01.095034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095046] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095189] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095366] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095542] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118 [2024-12-09 18:10:01.095711] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd6d0 is same with the state(6) to be set 00:20:38.118
[2024-12-09 18:10:01.097222] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.118
[2024-12-09 18:10:01.097305] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.118
[2024-12-09 18:10:01.097617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.097986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.097999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.098014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.098027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.098043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.098056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.118
[2024-12-09 18:10:01.098071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.118
[2024-12-09 18:10:01.098084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be090 is same with the state(6) to be set 00:20:38.119
[2024-12-09 18:10:01.098215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.119
[2024-12-09 18:10:01.098588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.119
[2024-12-09 18:10:01.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.098973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.098990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.120
[2024-12-09 18:10:01.099390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.120
[2024-12-09 18:10:01.099408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.121
[2024-12-09 18:10:01.099663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.121
[2024-12-09 18:10:01.099719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.121
[2024-12-09
18:10:01.100838] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.121 [2024-12-09 18:10:01.101271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is 
same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be 
set 00:20:38.121 [2024-12-09 18:10:01.101761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.121 [2024-12-09 18:10:01.101785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 
18:10:01.101905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.101999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.102015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.102027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.102039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.102066] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.102078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.102090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18be410 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.103017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:38.122 [2024-12-09 18:10:01.103098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2bc0 (9): Bad file descriptor 00:20:38.122 [2024-12-09 18:10:01.103158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15456d0 (9): Bad file descriptor 00:20:38.122 [2024-12-09 18:10:01.103238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 
18:10:01.103334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2490 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.103406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10f5460 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.103568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f6310 (9): Bad file descriptor 00:20:38.122 [2024-12-09 18:10:01.103631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.122 [2024-12-09 18:10:01.103742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.103761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5e80 is same with the state(6) to be set 00:20:38.122 [2024-12-09 18:10:01.104259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104284] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.122 [2024-12-09 18:10:01.104581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.122 [2024-12-09 18:10:01.104599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with [2024-12-09 18:10:01.104745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:38.123 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104802] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with [2024-12-09 18:10:01.104834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1the state(6) to be set 00:20:38.123 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 18:10:01.104854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:38.123 [2024-12-09 18:10:01.104895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:1[2024-12-09 18:10:01.104934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 18:10:01.104948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.104987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.104995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.104999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with [2024-12-09 18:10:01.105024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:1the state(6) to be set 00:20:38.123 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with [2024-12-09 18:10:01.105042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:38.123 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 18:10:01.105120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:38.123 [2024-12-09 18:10:01.105156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.123 [2024-12-09 18:10:01.105268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.123 [2024-12-09 18:10:01.105280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.123 [2024-12-09 18:10:01.105281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 
[2024-12-09 18:10:01.105392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with 
the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bec60 is same with the state(6) to be set 00:20:38.124 [2024-12-09 18:10:01.105625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 
18:10:01.105721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105891] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.105981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.105995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.124 [2024-12-09 18:10:01.106010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.124 [2024-12-09 18:10:01.106023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 
[2024-12-09 18:10:01.106216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.125 [2024-12-09 18:10:01.106270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.125 [2024-12-09 18:10:01.106306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.125 [2024-12-09 18:10:01.106392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106646] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.125 [2024-12-09 18:10:01.106672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 
18:10:01.106850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.106994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107145] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107425] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107709] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.107909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d9a0 is same with the state(6) to be set 00:20:38.125 [2024-12-09 18:10:01.108327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:38.125 [2024-12-09 18:10:01.108395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1516fa0 (9): Bad file descriptor 00:20:38.126 [2024-12-09 18:10:01.108648] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:20:38.126 [2024-12-09 18:10:01.108688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f2bc0 with addr=10.0.0.2, port=4420 00:20:38.126 [2024-12-09 18:10:01.108705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2bc0 is same with the state(6) to be set 00:20:38.126 [2024-12-09 18:10:01.109044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2bc0 (9): Bad file descriptor 00:20:38.126 [2024-12-09 18:10:01.109150] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.126 [2024-12-09 18:10:01.109720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-12-09 18:10:01.109749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1516fa0 with addr=10.0.0.2, port=4420 00:20:38.126 [2024-12-09 18:10:01.109764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516fa0 is same with the state(6) to be set 00:20:38.126 [2024-12-09 18:10:01.109780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:38.126 [2024-12-09 18:10:01.109793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:38.126 [2024-12-09 18:10:01.109810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:38.126 [2024-12-09 18:10:01.109827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:38.126 [2024-12-09 18:10:01.109917] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.126 [2024-12-09 18:10:01.109993] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.126 [2024-12-09 18:10:01.110090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1516fa0 (9): Bad file descriptor 00:20:38.126 [2024-12-09 18:10:01.110234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:38.126 [2024-12-09 18:10:01.110260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:38.126 [2024-12-09 18:10:01.110276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:38.126 [2024-12-09 18:10:01.110291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:20:38.126 [2024-12-09 18:10:01.110377] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:38.126 [2024-12-09 18:10:01.113066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:38.126 [2024-12-09 18:10:01.113096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.126 [2024-12-09 18:10:01.113112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:38.126 [2024-12-09 18:10:01.113126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.126 [2024-12-09 18:10:01.113139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:38.126 [2024-12-09 18:10:01.113152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.126 [2024-12-09 18:10:01.113165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:38.126 [2024-12-09 18:10:01.113178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.126 [2024-12-09 18:10:01.113191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15635a0 is same with the state(6) to be set
00:20:38.126 [2024-12-09 18:10:01.113360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154fc60 is same with the state(6) to be set
00:20:38.126 [2024-12-09 18:10:01.113526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105e110 is same with the state(6) to be set
00:20:38.126 [2024-12-09 18:10:01.113563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2490 (9): Bad file descriptor
00:20:38.126 [2024-12-09 18:10:01.113595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f5460 (9): Bad file descriptor
00:20:38.126 [2024-12-09 18:10:01.113640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f5e80 (9): Bad file descriptor
00:20:38.126 [2024-12-09 18:10:01.113792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.126 [2024-12-09 18:10:01.113814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.126 [2024-12-09 18:10:01.113838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.126 [2024-12-09 18:10:01.113853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.128 [2024-12-09 18:10:01.115745] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9f50 is same with the state(6) to be set
00:20:38.128 [2024-12-09 18:10:01.131459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.128 [2024-12-09 18:10:01.131529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.129 [2024-12-09 18:10:01.132757] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.132981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.132997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 
[2024-12-09 18:10:01.133097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.129 [2024-12-09 18:10:01.133511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.129 [2024-12-09 18:10:01.133534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d5b0 is same with the state(6) to be set 00:20:38.398 [2024-12-09 18:10:01.134819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:38.398 [2024-12-09 18:10:01.134856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:38.398 [2024-12-09 18:10:01.134969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15635a0 (9): Bad file descriptor 00:20:38.398 [2024-12-09 18:10:01.135013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154fc60 (9): Bad file descriptor 00:20:38.398 [2024-12-09 18:10:01.135047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105e110 (9): Bad file descriptor 00:20:38.398 [2024-12-09 18:10:01.135368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.398 [2024-12-09 18:10:01.135405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f6310 with addr=10.0.0.2, port=4420 
00:20:38.398 [2024-12-09 18:10:01.135424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f6310 is same with the state(6) to be set 00:20:38.398 [2024-12-09 18:10:01.135531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.398 [2024-12-09 18:10:01.135586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15456d0 with addr=10.0.0.2, port=4420 00:20:38.398 [2024-12-09 18:10:01.135605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15456d0 is same with the state(6) to be set 00:20:38.398 [2024-12-09 18:10:01.135964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.135987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 
18:10:01.136286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.398 [2024-12-09 18:10:01.136360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.398 [2024-12-09 18:10:01.136376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 
[2024-12-09 18:10:01.136853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.136981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.136996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.399 [2024-12-09 18:10:01.137364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.399 [2024-12-09 18:10:01.137378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.399 [2024-12-09 18:10:01.137393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.399 [2024-12-09 18:10:01.138002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb0c0 is same with the state(6) to be set
00:20:38.399 [2024-12-09 18:10:01.139265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.400 [2024-12-09 18:10:01.141249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ca260 is same with the state(6) to be set
00:20:38.400 [2024-12-09 18:10:01.142516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143533] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.143971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.143987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 
18:10:01.144078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.144474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.144487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f8410 is same with the state(6) to be set 00:20:38.401 [2024-12-09 18:10:01.146083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:38.401 [2024-12-09 18:10:01.146118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:38.401 [2024-12-09 18:10:01.146146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:38.401 [2024-12-09 18:10:01.146168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:38.401 [2024-12-09 18:10:01.146189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:38.401 [2024-12-09 18:10:01.146276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f6310 (9): Bad file 
descriptor 00:20:38.401 [2024-12-09 18:10:01.146303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15456d0 (9): Bad file descriptor 00:20:38.401 [2024-12-09 18:10:01.146410] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:38.401 [2024-12-09 18:10:01.146437] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:20:38.401 [2024-12-09 18:10:01.146830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.401 [2024-12-09 18:10:01.146861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f2bc0 with addr=10.0.0.2, port=4420 00:20:38.401 [2024-12-09 18:10:01.146878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2bc0 is same with the state(6) to be set 00:20:38.401 [2024-12-09 18:10:01.146985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.401 [2024-12-09 18:10:01.147011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1516fa0 with addr=10.0.0.2, port=4420 00:20:38.401 [2024-12-09 18:10:01.147027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516fa0 is same with the state(6) to be set 00:20:38.401 [2024-12-09 18:10:01.147112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.401 [2024-12-09 18:10:01.147137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f5e80 with addr=10.0.0.2, port=4420 00:20:38.401 [2024-12-09 18:10:01.147152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5e80 is same with the state(6) to be set 00:20:38.401 [2024-12-09 18:10:01.147236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 
111 00:20:38.401 [2024-12-09 18:10:01.147260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f2490 with addr=10.0.0.2, port=4420 00:20:38.401 [2024-12-09 18:10:01.147275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2490 is same with the state(6) to be set 00:20:38.401 [2024-12-09 18:10:01.147364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.401 [2024-12-09 18:10:01.147394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f5460 with addr=10.0.0.2, port=4420 00:20:38.401 [2024-12-09 18:10:01.147410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5460 is same with the state(6) to be set 00:20:38.401 [2024-12-09 18:10:01.147426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:38.401 [2024-12-09 18:10:01.147439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:38.401 [2024-12-09 18:10:01.147456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:38.401 [2024-12-09 18:10:01.147473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:38.401 [2024-12-09 18:10:01.147489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:38.401 [2024-12-09 18:10:01.147501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:38.401 [2024-12-09 18:10:01.147513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:20:38.401 [2024-12-09 18:10:01.147524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:38.401 [2024-12-09 18:10:01.148422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.401 [2024-12-09 18:10:01.148862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.401 [2024-12-09 18:10:01.148877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.148891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.148907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.148920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.148936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.148950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.148966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.148980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.148995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149118] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 
18:10:01.149630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.149982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.149995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.150011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.150039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.150067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.150097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:38.402 [2024-12-09 18:10:01.150126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.150155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.150184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.150209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.159245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.159306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.159323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.159338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.159354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.159368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.159384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.159398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.159413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.159427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.159443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f96d0 is same with the state(6) to be set 00:20:38.402 [2024-12-09 18:10:01.160875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.160901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.160925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.160940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.160956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.160971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.160987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:38.402 [2024-12-09 18:10:01.161165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.402 [2024-12-09 18:10:01.161690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-12-09 18:10:01.161704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161834] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.161984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.161997] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 
18:10:01.162334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.162799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.162812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbca0 is same with the state(6) to be set 00:20:38.403 [2024-12-09 18:10:01.164053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:38.403 [2024-12-09 18:10:01.164076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164916] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.164978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.164994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-12-09 18:10:01.165007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.403 [2024-12-09 18:10:01.165023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 
18:10:01.165253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:38.404 [2024-12-09 18:10:01.165756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.404 [2024-12-09 18:10:01.165899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.404 [2024-12-09 18:10:01.165912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.404 [2024-12-09 18:10:01.165928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.404 [2024-12-09 18:10:01.165941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.404 [2024-12-09 18:10:01.165955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fcf90 is same with the state(6) to be set
00:20:38.404 [2024-12-09 18:10:01.167632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:38.404 [2024-12-09 18:10:01.167669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:38.404 task offset: 28672 on job bdev=Nvme4n1 fails
00:20:38.404
00:20:38.404 Latency(us)
00:20:38.404 [2024-12-09T17:10:01.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:38.404 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme1n1 ended in about 0.91 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme1n1 : 0.91 140.76 8.80 70.38 0.00 299715.95 28350.39 260978.92
00:20:38.404 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme2n1 ended in about 0.92 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme2n1 : 0.92 139.54 8.72 69.77 0.00 296170.64 21554.06 259425.47
00:20:38.404 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme3n1 ended in about 0.92 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme3n1 : 0.92 208.58 13.04 69.53 0.00 218274.70 18544.26 253211.69
00:20:38.404 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme4n1 ended in about 0.88 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme4n1 : 0.88 218.11 13.63 72.70 0.00 203586.80 3689.43 254765.13
00:20:38.404 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme5n1 ended in about 0.92 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme5n1 : 0.92 138.56 8.66 69.28 0.00 279797.76 20291.89 256318.58
00:20:38.404 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme6n1 ended in about 0.94 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme6n1 : 0.94 136.35 8.52 68.18 0.00 278410.18 40777.96 265639.25
00:20:38.404 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme7n1 ended in about 0.89 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme7n1 : 0.89 216.60 13.54 72.20 0.00 191177.67 3155.44 254765.13
00:20:38.404 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme8n1 ended in about 0.94 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme8n1 : 0.94 135.88 8.49 67.94 0.00 267549.71 17573.36 290494.39
00:20:38.404 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme9n1 ended in about 0.95 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme9n1 : 0.95 135.43 8.46 67.71 0.00 262557.90 22719.15 267192.70
00:20:38.404 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.404 Job: Nvme10n1 ended in about 0.91 seconds with error
00:20:38.404 Verification LBA range: start 0x0 length 0x400
00:20:38.404 Nvme10n1 : 0.91 140.22 8.76 70.11 0.00 245814.87 22427.88 282727.16
00:20:38.404 [2024-12-09T17:10:01.445Z] ===================================================================================================================
00:20:38.404 [2024-12-09T17:10:01.445Z] Total : 1610.04 100.63 697.80 0.00 249763.87 3155.44 290494.39
00:20:38.404 [2024-12-09 18:10:01.197227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:38.404 [2024-12-09 18:10:01.197318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:38.404 [2024-12-09 18:10:01.197431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2bc0 (9): Bad file descriptor
00:20:38.404 [2024-12-09 18:10:01.197464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1516fa0 (9): Bad file descriptor
00:20:38.404 [2024-12-09 18:10:01.197483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f5e80 (9): Bad file descriptor
00:20:38.404 [2024-12-09 18:10:01.197502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2490 (9): Bad file descriptor
00:20:38.404 [2024-12-09 18:10:01.197519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f5460 (9): Bad file descriptor
00:20:38.404 [2024-12-09 18:10:01.197596] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:20:38.404 [2024-12-09 18:10:01.197623] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:20:38.404 [2024-12-09 18:10:01.197658] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:38.404 [2024-12-09 18:10:01.197678] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:38.404 [2024-12-09 18:10:01.197696] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:20:38.404 [2024-12-09 18:10:01.198147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.404 [2024-12-09 18:10:01.198185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x105e110 with addr=10.0.0.2, port=4420 00:20:38.404 [2024-12-09 18:10:01.198207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105e110 is same with the state(6) to be set 00:20:38.404 [2024-12-09 18:10:01.198311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.404 [2024-12-09 18:10:01.198338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15635a0 with addr=10.0.0.2, port=4420 00:20:38.404 [2024-12-09 18:10:01.198354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15635a0 is same with the state(6) to be set 00:20:38.404 [2024-12-09 18:10:01.198441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.404 [2024-12-09 18:10:01.198465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154fc60 with addr=10.0.0.2, port=4420 00:20:38.404 [2024-12-09 18:10:01.198480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154fc60 is same with the state(6) to be set 00:20:38.404 [2024-12-09 18:10:01.198496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.198509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.198526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.198542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:38.404 [2024-12-09 18:10:01.198569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.198582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.198594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.198606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:38.404 [2024-12-09 18:10:01.198620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.198631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.198643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.198655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:20:38.404 [2024-12-09 18:10:01.198668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.198680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.198692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.198703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:38.404 [2024-12-09 18:10:01.198716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.198733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.198747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.198758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:38.404 [2024-12-09 18:10:01.198789] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:38.404 [2024-12-09 18:10:01.198811] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:20:38.404 [2024-12-09 18:10:01.199753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.199782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.199856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105e110 (9): Bad file descriptor 00:20:38.404 [2024-12-09 18:10:01.199881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15635a0 (9): Bad file descriptor 00:20:38.404 [2024-12-09 18:10:01.199898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154fc60 (9): Bad file descriptor 00:20:38.404 [2024-12-09 18:10:01.199967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.199990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.200007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.200023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.200038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:38.404 [2024-12-09 18:10:01.200196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.404 [2024-12-09 18:10:01.200223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15456d0 with addr=10.0.0.2, port=4420 00:20:38.404 [2024-12-09 18:10:01.200239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15456d0 is same with the state(6) to be set 00:20:38.404 [2024-12-09 18:10:01.200321] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.404 [2024-12-09 18:10:01.200346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f6310 with addr=10.0.0.2, port=4420 00:20:38.404 [2024-12-09 18:10:01.200362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f6310 is same with the state(6) to be set 00:20:38.404 [2024-12-09 18:10:01.200376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.200388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.200400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.200413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:38.404 [2024-12-09 18:10:01.200427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.200439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.200451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:38.404 [2024-12-09 18:10:01.200463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:38.404 [2024-12-09 18:10:01.200481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:38.404 [2024-12-09 18:10:01.200495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:38.404 [2024-12-09 18:10:01.200507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.200518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:38.405 [2024-12-09 18:10:01.200673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.405 [2024-12-09 18:10:01.200700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f5460 with addr=10.0.0.2, port=4420 00:20:38.405 [2024-12-09 18:10:01.200715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5460 is same with the state(6) to be set 00:20:38.405 [2024-12-09 18:10:01.200796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.405 [2024-12-09 18:10:01.200820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f2490 with addr=10.0.0.2, port=4420 00:20:38.405 [2024-12-09 18:10:01.200836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2490 is same with the state(6) to be set 00:20:38.405 [2024-12-09 18:10:01.200908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.405 [2024-12-09 18:10:01.200932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f5e80 with addr=10.0.0.2, port=4420 00:20:38.405 [2024-12-09 18:10:01.200948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5e80 is same with the state(6) to be set 00:20:38.405 [2024-12-09 
18:10:01.201064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.405 [2024-12-09 18:10:01.201088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1516fa0 with addr=10.0.0.2, port=4420 00:20:38.405 [2024-12-09 18:10:01.201103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516fa0 is same with the state(6) to be set 00:20:38.405 [2024-12-09 18:10:01.201178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.405 [2024-12-09 18:10:01.201202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f2bc0 with addr=10.0.0.2, port=4420 00:20:38.405 [2024-12-09 18:10:01.201217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2bc0 is same with the state(6) to be set 00:20:38.405 [2024-12-09 18:10:01.201235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15456d0 (9): Bad file descriptor 00:20:38.405 [2024-12-09 18:10:01.201254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f6310 (9): Bad file descriptor 00:20:38.405 [2024-12-09 18:10:01.201300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f5460 (9): Bad file descriptor 00:20:38.405 [2024-12-09 18:10:01.201324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2490 (9): Bad file descriptor 00:20:38.405 [2024-12-09 18:10:01.201341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f5e80 (9): Bad file descriptor 00:20:38.405 [2024-12-09 18:10:01.201357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1516fa0 (9): Bad file descriptor 00:20:38.405 [2024-12-09 18:10:01.201374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2bc0 (9): Bad file 
descriptor 00:20:38.405 [2024-12-09 18:10:01.201389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:38.405 [2024-12-09 18:10:01.201445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:38.405 [2024-12-09 18:10:01.201519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:20:38.405 [2024-12-09 18:10:01.201587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:38.405 [2024-12-09 18:10:01.201635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:38.405 [2024-12-09 18:10:01.201683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:20:38.405 [2024-12-09 18:10:01.201732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:38.405 [2024-12-09 18:10:01.201744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:38.405 [2024-12-09 18:10:01.201757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:38.405 [2024-12-09 18:10:01.201768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:38.663 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:39.598 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1517290 00:20:39.598 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1517290 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1517290 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:39.599 18:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:39.599 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.859 rmmod nvme_tcp 00:20:39.859 rmmod nvme_fabrics 00:20:39.859 rmmod nvme_keyring 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1517113 ']' 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1517113 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1517113 ']' 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1517113 00:20:39.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1517113) - No such process 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1517113 is not found' 00:20:39.859 Process with pid 1517113 is not found 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:39.859 
18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.859 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.769 00:20:41.769 real 0m7.876s 00:20:41.769 user 0m20.466s 00:20:41.769 sys 0m1.484s 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 ************************************ 00:20:41.769 END TEST nvmf_shutdown_tc3 00:20:41.769 ************************************ 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:41.769 18:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 ************************************ 00:20:41.769 START TEST nvmf_shutdown_tc4 00:20:41.769 ************************************ 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.769 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.769 18:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:42.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.029 
18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:42.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:42.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:42.029 Found net devices under 0000:0a:00.1: cvl_0_1 
00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.029 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.030 18:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.030 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.030 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:20:42.288 00:20:42.288 --- 10.0.0.2 ping statistics --- 00:20:42.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.288 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:20:42.288 00:20:42.288 --- 10.0.0.1 ping statistics --- 00:20:42.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.288 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1518215 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1518215 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1518215 ']' 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.288 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.288 [2024-12-09 18:10:05.156006] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:20:42.288 [2024-12-09 18:10:05.156080] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.288 [2024-12-09 18:10:05.230207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.288 [2024-12-09 18:10:05.289121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.288 [2024-12-09 18:10:05.289168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.288 [2024-12-09 18:10:05.289198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.288 [2024-12-09 18:10:05.289210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.288 [2024-12-09 18:10:05.289220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
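The namespace plumbing traced above (nvmf/common.sh@265-291) can be reproduced standalone. A minimal sketch, assuming an already-present connected interface pair named cvl_0_0/cvl_0_1 (two ports of one adapter in this log; a veth pair also works for experimentation) and root privileges — an illustrative reconstruction, not the actual harness:

```shell
#!/usr/bin/env bash
# Sketch of the target-side network setup seen in this log.
# Assumption: cvl_0_0 and cvl_0_1 exist and are physically connected.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> root ns
```

The target then listens on 10.0.0.2:4420 inside the namespace while the initiator connects from the root namespace over cvl_0_1.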
00:20:42.288 [2024-12-09 18:10:05.290802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.288 [2024-12-09 18:10:05.291006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.288 [2024-12-09 18:10:05.291103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:42.288 [2024-12-09 18:10:05.291107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.547 [2024-12-09 18:10:05.435191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.547 18:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.547 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.547 Malloc1 00:20:42.547 [2024-12-09 18:10:05.517238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.547 Malloc2 00:20:42.807 Malloc3 00:20:42.807 Malloc4 00:20:42.807 Malloc5 00:20:42.807 Malloc6 00:20:42.807 Malloc7 00:20:42.807 Malloc8 00:20:43.066 Malloc9 
00:20:43.066 Malloc10 00:20:43.066 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.066 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:43.066 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.067 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:43.067 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1518384 00:20:43.067 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:43.067 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:43.067 [2024-12-09 18:10:06.053996] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
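The nvmf_shutdown_tc4 case drives spdk_nvme_perf against the namespaced target and then kills the target while I/O is in flight; the flood of write-error completions that follows is the behavior under test, not a harness failure. A minimal sketch of that flow, assuming an SPDK build tree at $SPDK and the namespace/listener set up earlier in this log:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown-under-load flow (paths taken from this log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1. Start the target inside the namespace, core mask 0x1E (cores 1-4).
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# 2. Drive queued random writes: depth 128, 45056-byte (44 KiB) I/O, 20 s.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!

# 3. Kill the target mid-run; perf is expected to surface
#    "CQ transport error -6" completions rather than hang.
sleep 5
kill "$nvmfpid"
wait "$perfpid" || true
```

Hedged accordingly: option meanings are annotated only where the log itself confirms them (core mask vs. reactor lines, transport ID vs. the listener notice).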
00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1518215 00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1518215 ']' 00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1518215 00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.348 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1518215 00:20:48.348 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.348 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.348 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1518215' 00:20:48.348 killing process with pid 1518215 00:20:48.348 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1518215 00:20:48.348 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1518215 00:20:48.348 [2024-12-09 18:10:11.037476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698050 is same with the state(6) to be set 00:20:48.348 [2024-12-09 
18:10:11.037602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698050 is same with the state(6) to be set
[… four further identical recv-state errors for tqpair=0x1698050 (18:10:11.037625–18:10:11.037663) elided …]
00:20:48.348 Write completed with error (sct=0, sc=8)
00:20:48.348 starting I/O failed: -6
[… repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions elided …]
00:20:48.349 [2024-12-09 18:10:11.049679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… repeated write-error completions elided …]
00:20:48.349 [2024-12-09 18:10:11.050854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[… repeated write-error completions elided …]
00:20:48.349 [2024-12-09 18:10:11.052119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:48.349 [2024-12-09 18:10:11.052300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1734670 is same with the state(6) to be set
[… eight further identical recv-state errors for tqpair=0x1734670 (18:10:11.052341–18:10:11.052439), interleaved with write-error completions from the concurrent I/O stream, elided …]
[… repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions elided …]
00:20:48.350 [2024-12-09 18:10:11.053782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:48.350 NVMe io qpair process completion error
00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 [2024-12-09 18:10:11.054538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694b60 is same with the state(6) to be set 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 [2024-12-09 18:10:11.054605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694b60 is same with the state(6) to be set 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 [2024-12-09 18:10:11.054622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694b60 is same with the state(6) to be set 00:20:48.350 starting I/O failed: -6 00:20:48.350 [2024-12-09 18:10:11.054636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694b60 is same with the 
state(6) to be set 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 [2024-12-09 18:10:11.055011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting 
I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 Write completed with error 
(sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.350 starting I/O failed: -6 00:20:48.350 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 [2024-12-09 18:10:11.056094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:48.351 starting I/O failed: -6 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write 
completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 
00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 [2024-12-09 18:10:11.057366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 
starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 
00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, 
sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 [2024-12-09 18:10:11.059254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:48.351 NVMe io qpair process completion error 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 starting I/O failed: -6 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.351 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error 
(sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 [2024-12-09 18:10:11.060416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 
00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write 
completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 [2024-12-09 18:10:11.061482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:48.352 starting I/O failed: -6 00:20:48.352 starting I/O failed: -6 00:20:48.352 starting I/O failed: -6 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 
starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 
Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 [2024-12-09 18:10:11.062848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O 
failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.352 starting I/O failed: -6 00:20:48.352 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting 
I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 Write completed with error (sct=0, sc=8) 00:20:48.353 starting I/O failed: -6 00:20:48.353 [2024-12-09 18:10:11.064798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:48.353 NVMe io qpair process completion error 
00:20:48.353 Write completed with error (sct=0, sc=8)
00:20:48.353 starting I/O failed: -6
00:20:48.353 [2024-12-09 18:10:11.066090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.353 [2024-12-09 18:10:11.067072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:48.354 [2024-12-09 18:10:11.068201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:48.354 [2024-12-09 18:10:11.070601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:48.354 NVMe io qpair process completion error
00:20:48.355 [2024-12-09 18:10:11.071957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:48.355 [2024-12-09 18:10:11.072940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.355 [2024-12-09 18:10:11.074126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:48.356 [2024-12-09 18:10:11.076741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:48.356 NVMe io qpair process completion error
00:20:48.356 [2024-12-09 18:10:11.078080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.356 [2024-12-09 18:10:11.079175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:48.356 Write completed with error
(sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.356 Write completed with error (sct=0, sc=8) 00:20:48.356 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting 
I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 [2024-12-09 18:10:11.080273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write 
completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 
Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 
00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 [2024-12-09 18:10:11.082360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:48.357 NVMe io qpair process completion error 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error 
(sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 starting I/O failed: -6 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 [2024-12-09 18:10:11.083579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.357 Write completed with error (sct=0, sc=8) 00:20:48.357 
starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with 
error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 [2024-12-09 18:10:11.084635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.358 starting I/O failed: -6 00:20:48.358 starting I/O failed: -6 00:20:48.358 starting I/O failed: -6 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, 
sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O 
failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 [2024-12-09 18:10:11.086018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:48.358 starting I/O failed: -6 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 
00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.358 Write completed with error (sct=0, sc=8) 00:20:48.358 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: 
-6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O 
failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 [2024-12-09 18:10:11.088633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:48.359 NVMe io qpair process completion error 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed 
with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 [2024-12-09 18:10:11.089922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 starting I/O failed: -6 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write completed with error (sct=0, sc=8) 00:20:48.359 Write 
00:20:48.359 Write completed with error (sct=0, sc=8)
00:20:48.359 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeated for the remaining queued writes ...]
00:20:48.359 [2024-12-09 18:10:11.091040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[...]
00:20:48.360 [2024-12-09 18:10:11.092173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[...]
00:20:48.360 [2024-12-09 18:10:11.094339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:48.360 NVMe io qpair process completion error
[...]
00:20:48.360 [2024-12-09 18:10:11.095685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[...]
00:20:48.361 [2024-12-09 18:10:11.096811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[...]
00:20:48.361 [2024-12-09 18:10:11.097982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[...]
00:20:48.362 [2024-12-09 18:10:11.100102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:48.362 NVMe io qpair process completion error
[...]
00:20:48.362 [2024-12-09 18:10:11.101239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[...]
00:20:48.362 [2024-12-09 18:10:11.102305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[...]
00:20:48.363 [2024-12-09 18:10:11.103527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs continue ...]
00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 Write completed with error (sct=0, sc=8) 00:20:48.363 starting I/O failed: -6 00:20:48.363 [2024-12-09 18:10:11.107774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or 
address) on qpair id 4 00:20:48.363 NVMe io qpair process completion error 00:20:48.363 Initializing NVMe Controllers 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:20:48.363 Controller IO queue size 128, less than required. 
00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.363 Controller IO queue size 128, less than required. 00:20:48.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:20:48.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:48.363 Initialization complete. 
Launching workers. 00:20:48.363 ======================================================== 00:20:48.363 Latency(us) 00:20:48.363 Device Information : IOPS MiB/s Average min max 00:20:48.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1951.79 83.87 65599.31 862.24 120677.89 00:20:48.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1810.00 77.77 70773.02 1130.70 127418.28 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1792.49 77.02 71495.67 884.94 130170.83 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1768.14 75.97 72517.68 896.48 133688.53 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1805.30 77.57 71057.18 829.40 136742.35 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1838.40 78.99 69807.83 862.71 118806.69 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1856.12 79.76 68326.20 878.82 115305.41 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1777.11 76.36 71386.38 961.59 115803.13 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1803.59 77.50 70364.81 858.96 118407.69 00:20:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1803.17 77.48 70407.79 874.81 116008.58 00:20:48.364 ======================================================== 00:20:48.364 Total : 18206.12 782.29 70123.48 829.40 136742.35 00:20:48.364 00:20:48.364 [2024-12-09 18:10:11.114049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf42c0 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf36b0 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114211] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf39e0 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4920 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf45f0 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4c50 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5ae0 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3d10 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5900 is same with the state(6) to be set 00:20:48.364 [2024-12-09 18:10:11.114628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5720 is same with the state(6) to be set 00:20:48.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:48.624 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1518384 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1518384 00:20:49.561 18:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1518384 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:49.561 18:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.561 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.561 rmmod nvme_tcp 00:20:49.820 rmmod nvme_fabrics 00:20:49.820 rmmod nvme_keyring 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1518215 ']' 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1518215 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1518215 ']' 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1518215 00:20:49.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1518215) - No such process 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1518215 is not 
found' 00:20:49.820 Process with pid 1518215 is not found 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.820 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.724 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.724 00:20:51.724 real 0m9.895s 00:20:51.724 user 0m24.066s 00:20:51.724 sys 0m5.577s 00:20:51.724 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:51.724 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:51.724 ************************************ 00:20:51.724 END TEST nvmf_shutdown_tc4 00:20:51.724 ************************************ 00:20:51.724 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:51.724 00:20:51.724 real 0m37.573s 00:20:51.724 user 1m42.263s 00:20:51.724 sys 0m11.906s 00:20:51.724 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.724 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.724 ************************************ 00:20:51.725 END TEST nvmf_shutdown 00:20:51.725 ************************************ 00:20:51.725 18:10:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:51.725 18:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.725 18:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.725 18:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.983 ************************************ 00:20:51.983 START TEST nvmf_nsid 00:20:51.983 ************************************ 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:51.983 * Looking for test storage... 
00:20:51.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.983 
18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.983 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:51.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.983 --rc genhtml_branch_coverage=1 00:20:51.983 --rc genhtml_function_coverage=1 00:20:51.983 --rc genhtml_legend=1 00:20:51.983 --rc geninfo_all_blocks=1 00:20:51.984 --rc 
geninfo_unexecuted_blocks=1 00:20:51.984 00:20:51.984 ' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:51.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.984 --rc genhtml_branch_coverage=1 00:20:51.984 --rc genhtml_function_coverage=1 00:20:51.984 --rc genhtml_legend=1 00:20:51.984 --rc geninfo_all_blocks=1 00:20:51.984 --rc geninfo_unexecuted_blocks=1 00:20:51.984 00:20:51.984 ' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:51.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.984 --rc genhtml_branch_coverage=1 00:20:51.984 --rc genhtml_function_coverage=1 00:20:51.984 --rc genhtml_legend=1 00:20:51.984 --rc geninfo_all_blocks=1 00:20:51.984 --rc geninfo_unexecuted_blocks=1 00:20:51.984 00:20:51.984 ' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:51.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.984 --rc genhtml_branch_coverage=1 00:20:51.984 --rc genhtml_function_coverage=1 00:20:51.984 --rc genhtml_legend=1 00:20:51.984 --rc geninfo_all_blocks=1 00:20:51.984 --rc geninfo_unexecuted_blocks=1 00:20:51.984 00:20:51.984 ' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.984 18:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.984 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:54.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:54.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:54.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:54.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.514 18:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:54.514 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:54.514 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:20:54.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:20:54.514 00:20:54.514 --- 10.0.0.2 ping statistics --- 00:20:54.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.515 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:54.515 00:20:54.515 --- 10.0.0.1 ping statistics --- 00:20:54.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.515 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.515 18:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1521133 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1521133 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1521133 ']' 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:54.515 [2024-12-09 18:10:17.310831] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:20:54.515 [2024-12-09 18:10:17.310916] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.515 [2024-12-09 18:10:17.381184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.515 [2024-12-09 18:10:17.437853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.515 [2024-12-09 18:10:17.437919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.515 [2024-12-09 18:10:17.437943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.515 [2024-12-09 18:10:17.437976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.515 [2024-12-09 18:10:17.437986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.515 [2024-12-09 18:10:17.438600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.515 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1521156 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.773 
18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8789d363-6eae-4f88-b18c-6c24c223a5ec 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=483659f9-9588-402a-8f3a-a6b7bf245cad 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=dc62e905-02c6-4b03-aa5f-a0e3ff5ac687 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.773 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:54.773 null0 00:20:54.773 null1 00:20:54.773 null2 00:20:54.773 [2024-12-09 18:10:17.610918] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.773 [2024-12-09 18:10:17.623022] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:20:54.774 [2024-12-09 18:10:17.623099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521156 ] 00:20:54.774 [2024-12-09 18:10:17.635103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1521156 /var/tmp/tgt2.sock 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1521156 ']' 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:54.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.774 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:54.774 [2024-12-09 18:10:17.690633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.774 [2024-12-09 18:10:17.747329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.031 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.031 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:55.031 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:55.601 [2024-12-09 18:10:18.389123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.601 [2024-12-09 18:10:18.405303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:55.601 nvme0n1 nvme0n2 00:20:55.601 nvme1n1 00:20:55.601 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:55.601 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:55.601 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:56.168 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8789d363-6eae-4f88-b18c-6c24c223a5ec 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:57.105 18:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8789d3636eae4f88b18c6c24c223a5ec 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8789D3636EAE4F88B18C6C24C223A5EC 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8789D3636EAE4F88B18C6C24C223A5EC == \8\7\8\9\D\3\6\3\6\E\A\E\4\F\8\8\B\1\8\C\6\C\2\4\C\2\2\3\A\5\E\C ]] 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 483659f9-9588-402a-8f3a-a6b7bf245cad 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:57.105 
18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:57.105 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=483659f99588402a8f3aa6b7bf245cad 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 483659F99588402A8F3AA6B7BF245CAD 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 483659F99588402A8F3AA6B7BF245CAD == \4\8\3\6\5\9\F\9\9\5\8\8\4\0\2\A\8\F\3\A\A\6\B\7\B\F\2\4\5\C\A\D ]] 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid dc62e905-02c6-4b03-aa5f-a0e3ff5ac687 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dc62e90502c64b03aa5fa0e3ff5ac687 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DC62E90502C64B03AA5FA0E3FF5AC687 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DC62E90502C64B03AA5FA0E3FF5AC687 == \D\C\6\2\E\9\0\5\0\2\C\6\4\B\0\3\A\A\5\F\A\0\E\3\F\F\5\A\C\6\8\7 ]] 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:57.364 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1521156 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1521156 ']' 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1521156 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1521156 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1521156' 00:20:57.624 killing process with pid 1521156 00:20:57.624 18:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1521156 00:20:57.624 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1521156 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.882 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.882 rmmod nvme_tcp 00:20:57.882 rmmod nvme_fabrics 00:20:57.882 rmmod nvme_keyring 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1521133 ']' 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1521133 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1521133 ']' 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1521133 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.142 18:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1521133 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1521133' 00:20:58.142 killing process with pid 1521133 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1521133 00:20:58.142 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1521133 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.402 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.402 18:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.309 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.309 00:21:00.309 real 0m8.468s 00:21:00.309 user 0m8.293s 00:21:00.309 sys 0m2.709s 00:21:00.309 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.309 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:00.309 ************************************ 00:21:00.309 END TEST nvmf_nsid 00:21:00.309 ************************************ 00:21:00.309 18:10:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:00.309 00:21:00.309 real 11m44.348s 00:21:00.309 user 27m45.344s 00:21:00.309 sys 2m45.385s 00:21:00.309 18:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.309 18:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.309 ************************************ 00:21:00.309 END TEST nvmf_target_extra 00:21:00.309 ************************************ 00:21:00.309 18:10:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:00.309 18:10:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.309 18:10:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.309 18:10:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.309 ************************************ 00:21:00.309 START TEST nvmf_host 00:21:00.309 ************************************ 00:21:00.309 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:00.568 * Looking for test storage... 
00:21:00.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.568 --rc genhtml_branch_coverage=1 00:21:00.568 --rc genhtml_function_coverage=1 00:21:00.568 --rc genhtml_legend=1 00:21:00.568 --rc geninfo_all_blocks=1 00:21:00.568 --rc geninfo_unexecuted_blocks=1 00:21:00.568 00:21:00.568 ' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.568 --rc genhtml_branch_coverage=1 00:21:00.568 --rc genhtml_function_coverage=1 00:21:00.568 --rc genhtml_legend=1 00:21:00.568 --rc 
geninfo_all_blocks=1 00:21:00.568 --rc geninfo_unexecuted_blocks=1 00:21:00.568 00:21:00.568 ' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.568 --rc genhtml_branch_coverage=1 00:21:00.568 --rc genhtml_function_coverage=1 00:21:00.568 --rc genhtml_legend=1 00:21:00.568 --rc geninfo_all_blocks=1 00:21:00.568 --rc geninfo_unexecuted_blocks=1 00:21:00.568 00:21:00.568 ' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.568 --rc genhtml_branch_coverage=1 00:21:00.568 --rc genhtml_function_coverage=1 00:21:00.568 --rc genhtml_legend=1 00:21:00.568 --rc geninfo_all_blocks=1 00:21:00.568 --rc geninfo_unexecuted_blocks=1 00:21:00.568 00:21:00.568 ' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.568 18:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.569 ************************************ 00:21:00.569 START TEST nvmf_multicontroller 00:21:00.569 ************************************ 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:00.569 * Looking for test storage... 
00:21:00.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.569 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.828 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.829 --rc genhtml_branch_coverage=1 00:21:00.829 --rc genhtml_function_coverage=1 
00:21:00.829 --rc genhtml_legend=1 00:21:00.829 --rc geninfo_all_blocks=1 00:21:00.829 --rc geninfo_unexecuted_blocks=1 00:21:00.829 00:21:00.829 ' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.829 --rc genhtml_branch_coverage=1 00:21:00.829 --rc genhtml_function_coverage=1 00:21:00.829 --rc genhtml_legend=1 00:21:00.829 --rc geninfo_all_blocks=1 00:21:00.829 --rc geninfo_unexecuted_blocks=1 00:21:00.829 00:21:00.829 ' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.829 --rc genhtml_branch_coverage=1 00:21:00.829 --rc genhtml_function_coverage=1 00:21:00.829 --rc genhtml_legend=1 00:21:00.829 --rc geninfo_all_blocks=1 00:21:00.829 --rc geninfo_unexecuted_blocks=1 00:21:00.829 00:21:00.829 ' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.829 --rc genhtml_branch_coverage=1 00:21:00.829 --rc genhtml_function_coverage=1 00:21:00.829 --rc genhtml_legend=1 00:21:00.829 --rc geninfo_all_blocks=1 00:21:00.829 --rc geninfo_unexecuted_blocks=1 00:21:00.829 00:21:00.829 ' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.829 18:10:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.829 18:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.364 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.365 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.365 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.365 18:10:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.365 18:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:21:03.365 00:21:03.365 --- 10.0.0.2 ping statistics --- 00:21:03.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.365 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:03.365 00:21:03.365 --- 10.0.0.1 ping statistics --- 00:21:03.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.365 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1523711 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
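The `nvmf_tcp_init` sequence above (common.sh@250-291) moves the target-side interface into its own network namespace, addresses both ends, opens the NVMe/TCP port, and ping-checks the path. An illustrative condensation; it must run as root, and the function name and parameter order are this sketch's, not common.sh's:

```shell
# Condensed sketch of nvmf_tcp_init (common.sh@250-291). Requires root.
# Function name/parameters are illustrative; the ip/iptables commands
# mirror the ones traced in the log.
setup_tcp_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add "$ini_ip/24" dev "$ini_if"
    ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP discovery/IO port toward the initiator side
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 "$tgt_ip" && ip netns exec "$ns" ping -c 1 "$ini_ip"
}
```

In the run above the equivalent invocation would be `setup_tcp_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1`.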
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1523711 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1523711 ']' 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.365 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.366 [2024-12-09 18:10:26.157091] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:21:03.366 [2024-12-09 18:10:26.157164] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.366 [2024-12-09 18:10:26.230748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:03.366 [2024-12-09 18:10:26.288031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.366 [2024-12-09 18:10:26.288083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:03.366 [2024-12-09 18:10:26.288111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.366 [2024-12-09 18:10:26.288121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.366 [2024-12-09 18:10:26.288130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.366 [2024-12-09 18:10:26.289698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.366 [2024-12-09 18:10:26.289752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.366 [2024-12-09 18:10:26.289755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.366 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 [2024-12-09 18:10:26.425227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
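`nvmfappstart` launches `nvmf_tgt` inside the namespace and `waitforlisten` (with `max_retries=100`, as seen in autotest_common.sh@840) blocks until the process is up on `/var/tmp/spdk.sock`. A reduced polling sketch of that idea; the real helper also checks that the target PID is still alive, which is omitted here:

```shell
# Reduced sketch of the waitforlisten idea: poll for a UNIX-domain RPC
# socket with a retry cap (autotest_common.sh uses max_retries=100).
# The real helper additionally verifies the process is still running.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```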
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 Malloc0 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 [2024-12-09 
18:10:26.481322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 [2024-12-09 18:10:26.489187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 Malloc1 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.624 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1523739 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1523739 /var/tmp/bdevperf.sock 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
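The `rpc_cmd` calls above (multicontroller.sh@27-41) build two subsystems, each backed by a 64 MiB malloc bdev and listening on ports 4420 and 4421. Outside the harness the same configuration can be driven with `scripts/rpc.py`, which `rpc_cmd` wraps; all flags below are copied verbatim from the log, and the `RPC_CMD` override is this sketch's addition so the sequence can be dry-run:

```shell
# The target configuration performed by multicontroller.sh@27-41, issued
# via scripts/rpc.py (rpc_cmd's underlying tool). Flags are verbatim from
# the log; RPC_CMD is a hypothetical override (e.g. RPC_CMD=echo to dry-run).
configure_multicontroller_tgt() {
    local rpc=${RPC_CMD:-scripts/rpc.py}
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
}
```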
host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1523739 ']' 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.625 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.884 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.884 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:03.884 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:03.884 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.884 18:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.145 NVMe0n1 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.145 1 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:04.145 18:10:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.145 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.145 request: 00:21:04.145 { 00:21:04.145 "name": "NVMe0", 00:21:04.145 "trtype": "tcp", 00:21:04.145 "traddr": "10.0.0.2", 00:21:04.145 "adrfam": "ipv4", 00:21:04.145 "trsvcid": "4420", 00:21:04.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.145 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:04.145 "hostaddr": "10.0.0.1", 00:21:04.145 "prchk_reftag": false, 00:21:04.145 "prchk_guard": false, 00:21:04.145 "hdgst": false, 00:21:04.145 "ddgst": false, 00:21:04.145 "allow_unrecognized_csi": false, 00:21:04.145 "method": "bdev_nvme_attach_controller", 00:21:04.145 "req_id": 1 00:21:04.145 } 00:21:04.145 Got JSON-RPC error response 00:21:04.145 response: 00:21:04.145 { 00:21:04.146 "code": -114, 00:21:04.146 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:04.146 } 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:04.146 18:10:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.146 request: 00:21:04.146 { 00:21:04.146 "name": "NVMe0", 00:21:04.146 "trtype": "tcp", 00:21:04.146 "traddr": "10.0.0.2", 00:21:04.146 "adrfam": "ipv4", 00:21:04.146 "trsvcid": "4420", 00:21:04.146 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.146 "hostaddr": "10.0.0.1", 00:21:04.146 "prchk_reftag": false, 00:21:04.146 "prchk_guard": false, 00:21:04.146 "hdgst": false, 00:21:04.146 "ddgst": false, 00:21:04.146 "allow_unrecognized_csi": false, 00:21:04.146 "method": "bdev_nvme_attach_controller", 00:21:04.146 "req_id": 1 00:21:04.146 } 00:21:04.146 Got JSON-RPC error response 00:21:04.146 response: 00:21:04.146 { 00:21:04.146 "code": -114, 00:21:04.146 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:04.146 } 00:21:04.146 18:10:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.146 request: 00:21:04.146 { 00:21:04.146 "name": "NVMe0", 00:21:04.146 "trtype": "tcp", 00:21:04.146 "traddr": "10.0.0.2", 00:21:04.146 "adrfam": "ipv4", 00:21:04.146 "trsvcid": "4420", 00:21:04.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.146 "hostaddr": "10.0.0.1", 00:21:04.146 "prchk_reftag": false, 00:21:04.146 "prchk_guard": false, 00:21:04.146 "hdgst": false, 00:21:04.146 "ddgst": false, 00:21:04.146 "multipath": "disable", 00:21:04.146 "allow_unrecognized_csi": false, 00:21:04.146 "method": "bdev_nvme_attach_controller", 00:21:04.146 "req_id": 1 00:21:04.146 } 00:21:04.146 Got JSON-RPC error response 00:21:04.146 response: 00:21:04.146 { 00:21:04.146 "code": -114, 00:21:04.146 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:04.146 } 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.146 request: 00:21:04.146 { 00:21:04.146 "name": "NVMe0", 00:21:04.146 "trtype": "tcp", 00:21:04.146 "traddr": "10.0.0.2", 00:21:04.146 "adrfam": "ipv4", 00:21:04.146 "trsvcid": "4420", 00:21:04.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.146 "hostaddr": "10.0.0.1", 00:21:04.146 "prchk_reftag": false, 00:21:04.146 "prchk_guard": false, 00:21:04.146 "hdgst": false, 00:21:04.146 "ddgst": false, 00:21:04.146 "multipath": "failover", 00:21:04.146 "allow_unrecognized_csi": false, 00:21:04.146 "method": "bdev_nvme_attach_controller", 00:21:04.146 "req_id": 1 00:21:04.146 } 00:21:04.146 Got JSON-RPC error response 00:21:04.146 response: 00:21:04.146 { 00:21:04.146 "code": -114, 00:21:04.146 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:04.146 } 00:21:04.146 18:10:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.146 NVMe0n1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.146 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.406 00:21:04.406 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.406 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.407 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:04.407 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.407 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.407 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.407 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:04.407 18:10:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.784 { 00:21:05.784 "results": [ 00:21:05.784 { 00:21:05.784 "job": "NVMe0n1", 00:21:05.784 "core_mask": "0x1", 00:21:05.784 "workload": "write", 00:21:05.784 "status": "finished", 00:21:05.784 "queue_depth": 128, 00:21:05.784 "io_size": 4096, 00:21:05.784 "runtime": 1.006621, 00:21:05.784 "iops": 18295.863090477946, 00:21:05.784 "mibps": 71.46821519717948, 00:21:05.784 "io_failed": 0, 00:21:05.784 "io_timeout": 0, 00:21:05.784 "avg_latency_us": 6984.123200183406, 00:21:05.784 "min_latency_us": 6068.148148148148, 00:21:05.784 "max_latency_us": 16505.36296296296 00:21:05.784 } 00:21:05.784 ], 00:21:05.784 "core_count": 1 00:21:05.784 } 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1523739 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1523739 ']' 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1523739 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523739 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.784 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523739' 00:21:05.785 killing process with pid 1523739 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1523739 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1523739 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:05.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:05.785 [2024-12-09 18:10:26.594795] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:21:05.785 [2024-12-09 18:10:26.594893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523739 ] 00:21:05.785 [2024-12-09 18:10:26.663380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.785 [2024-12-09 18:10:26.722412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.785 [2024-12-09 18:10:27.241369] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name d8cd988c-f7da-4df0-86e0-c80b87e58737 already exists 00:21:05.785 [2024-12-09 18:10:27.241409] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:d8cd988c-f7da-4df0-86e0-c80b87e58737 alias for bdev NVMe1n1 00:21:05.785 [2024-12-09 18:10:27.241423] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:05.785 Running I/O for 1 seconds... 00:21:05.785 18289.00 IOPS, 71.44 MiB/s 00:21:05.785 Latency(us) 00:21:05.785 [2024-12-09T17:10:28.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.785 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:05.785 NVMe0n1 : 1.01 18295.86 71.47 0.00 0.00 6984.12 6068.15 16505.36 00:21:05.785 [2024-12-09T17:10:28.826Z] =================================================================================================================== 00:21:05.785 [2024-12-09T17:10:28.826Z] Total : 18295.86 71.47 0.00 0.00 6984.12 6068.15 16505.36 00:21:05.785 Received shutdown signal, test time was about 1.000000 seconds 00:21:05.785 00:21:05.785 Latency(us) 00:21:05.785 [2024-12-09T17:10:28.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.785 [2024-12-09T17:10:28.826Z] =================================================================================================================== 00:21:05.785 [2024-12-09T17:10:28.826Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:05.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.785 rmmod nvme_tcp 00:21:05.785 rmmod nvme_fabrics 00:21:05.785 rmmod nvme_keyring 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1523711 ']' 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1523711 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1523711 ']' 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1523711 
00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523711 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523711' 00:21:05.785 killing process with pid 1523711 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1523711 00:21:05.785 18:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1523711 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.354 18:10:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.316 00:21:08.316 real 0m7.605s 00:21:08.316 user 0m11.507s 00:21:08.316 sys 0m2.409s 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:08.316 ************************************ 00:21:08.316 END TEST nvmf_multicontroller 00:21:08.316 ************************************ 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.316 ************************************ 00:21:08.316 START TEST nvmf_aer 00:21:08.316 ************************************ 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:08.316 * Looking for test storage... 
00:21:08.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.316 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.317 --rc genhtml_branch_coverage=1 00:21:08.317 --rc genhtml_function_coverage=1 00:21:08.317 --rc genhtml_legend=1 00:21:08.317 --rc geninfo_all_blocks=1 00:21:08.317 --rc geninfo_unexecuted_blocks=1 00:21:08.317 00:21:08.317 ' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.317 --rc 
genhtml_branch_coverage=1 00:21:08.317 --rc genhtml_function_coverage=1 00:21:08.317 --rc genhtml_legend=1 00:21:08.317 --rc geninfo_all_blocks=1 00:21:08.317 --rc geninfo_unexecuted_blocks=1 00:21:08.317 00:21:08.317 ' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.317 --rc genhtml_branch_coverage=1 00:21:08.317 --rc genhtml_function_coverage=1 00:21:08.317 --rc genhtml_legend=1 00:21:08.317 --rc geninfo_all_blocks=1 00:21:08.317 --rc geninfo_unexecuted_blocks=1 00:21:08.317 00:21:08.317 ' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.317 --rc genhtml_branch_coverage=1 00:21:08.317 --rc genhtml_function_coverage=1 00:21:08.317 --rc genhtml_legend=1 00:21:08.317 --rc geninfo_all_blocks=1 00:21:08.317 --rc geninfo_unexecuted_blocks=1 00:21:08.317 00:21:08.317 ' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.317 18:10:31 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.317 18:10:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.851 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:10.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:10.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.852 18:10:33 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:10.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:10.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:10.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:21:10.852 00:21:10.852 --- 10.0.0.2 ping statistics --- 00:21:10.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.852 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:21:10.852 00:21:10.852 --- 10.0.0.1 ping statistics --- 00:21:10.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.852 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1525962 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1525962 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1525962 ']' 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.852 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:10.852 [2024-12-09 18:10:33.613116] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:21:10.853 [2024-12-09 18:10:33.613192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.853 [2024-12-09 18:10:33.685430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.853 [2024-12-09 18:10:33.739853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:10.853 [2024-12-09 18:10:33.739909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.853 [2024-12-09 18:10:33.739938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.853 [2024-12-09 18:10:33.739949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.853 [2024-12-09 18:10:33.739959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.853 [2024-12-09 18:10:33.741659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.853 [2024-12-09 18:10:33.741726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.853 [2024-12-09 18:10:33.741813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.853 [2024-12-09 18:10:33.741821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.853 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.853 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:10.853 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.853 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.853 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 [2024-12-09 18:10:33.895952] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 Malloc0 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 [2024-12-09 18:10:33.959567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.113 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 [ 00:21:11.113 { 00:21:11.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:11.113 "subtype": "Discovery", 00:21:11.113 "listen_addresses": [], 00:21:11.113 "allow_any_host": true, 00:21:11.113 "hosts": [] 00:21:11.113 }, 00:21:11.113 { 00:21:11.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.113 "subtype": "NVMe", 00:21:11.113 "listen_addresses": [ 00:21:11.113 { 00:21:11.113 "trtype": "TCP", 00:21:11.113 "adrfam": "IPv4", 00:21:11.113 "traddr": "10.0.0.2", 00:21:11.113 "trsvcid": "4420" 00:21:11.113 } 00:21:11.113 ], 00:21:11.113 "allow_any_host": true, 00:21:11.113 "hosts": [], 00:21:11.113 "serial_number": "SPDK00000000000001", 00:21:11.113 "model_number": "SPDK bdev Controller", 00:21:11.113 "max_namespaces": 2, 00:21:11.113 "min_cntlid": 1, 00:21:11.113 "max_cntlid": 65519, 00:21:11.113 "namespaces": [ 00:21:11.113 { 00:21:11.114 "nsid": 1, 00:21:11.114 "bdev_name": "Malloc0", 00:21:11.114 "name": "Malloc0", 00:21:11.114 "nguid": "28AC35CF66B549209BFB8A70AA13AFEF", 00:21:11.114 "uuid": "28ac35cf-66b5-4920-9bfb-8a70aa13afef" 00:21:11.114 } 00:21:11.114 ] 00:21:11.114 } 00:21:11.114 ] 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1526055 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:11.114 18:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:11.114 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:11.114 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:11.114 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:11.114 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 Malloc1 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 [ 00:21:11.373 { 00:21:11.373 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:11.373 "subtype": "Discovery", 00:21:11.373 "listen_addresses": [], 00:21:11.373 "allow_any_host": true, 00:21:11.373 "hosts": [] 00:21:11.373 }, 00:21:11.373 { 00:21:11.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.373 "subtype": "NVMe", 00:21:11.373 "listen_addresses": [ 00:21:11.373 { 00:21:11.373 "trtype": "TCP", 00:21:11.373 "adrfam": "IPv4", 00:21:11.373 "traddr": "10.0.0.2", 00:21:11.373 "trsvcid": "4420" 00:21:11.373 } 00:21:11.373 ], 00:21:11.373 "allow_any_host": true, 00:21:11.373 "hosts": [], 00:21:11.373 "serial_number": "SPDK00000000000001", 00:21:11.373 "model_number": 
"SPDK bdev Controller", 00:21:11.373 "max_namespaces": 2, 00:21:11.373 "min_cntlid": 1, 00:21:11.373 "max_cntlid": 65519, 00:21:11.373 "namespaces": [ 00:21:11.373 { 00:21:11.373 "nsid": 1, 00:21:11.373 "bdev_name": "Malloc0", 00:21:11.373 "name": "Malloc0", 00:21:11.373 "nguid": "28AC35CF66B549209BFB8A70AA13AFEF", 00:21:11.373 "uuid": "28ac35cf-66b5-4920-9bfb-8a70aa13afef" 00:21:11.373 }, 00:21:11.373 { 00:21:11.373 "nsid": 2, 00:21:11.373 "bdev_name": "Malloc1", 00:21:11.373 "name": "Malloc1", 00:21:11.373 "nguid": "1487CDC772154FC3B84E743EE1B979E2", 00:21:11.373 "uuid": "1487cdc7-7215-4fc3-b84e-743ee1b979e2" 00:21:11.373 } 00:21:11.373 ] 00:21:11.373 } 00:21:11.373 ] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1526055 00:21:11.373 Asynchronous Event Request test 00:21:11.373 Attaching to 10.0.0.2 00:21:11.373 Attached to 10.0.0.2 00:21:11.373 Registering asynchronous event callbacks... 00:21:11.373 Starting namespace attribute notice tests for all controllers... 00:21:11.373 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:11.373 aer_cb - Changed Namespace 00:21:11.373 Cleaning up... 
00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.373 rmmod nvme_tcp 
00:21:11.373 rmmod nvme_fabrics 00:21:11.373 rmmod nvme_keyring 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1525962 ']' 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1525962 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1525962 ']' 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1525962 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.373 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525962 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525962' 00:21:11.632 killing process with pid 1525962 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1525962 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1525962 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.632 18:10:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.632 18:10:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.176 00:21:14.176 real 0m5.495s 00:21:14.176 user 0m4.398s 00:21:14.176 sys 0m1.945s 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 ************************************ 00:21:14.176 END TEST nvmf_aer 00:21:14.176 ************************************ 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.176 ************************************ 00:21:14.176 START TEST nvmf_async_init 
00:21:14.176 ************************************ 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:14.176 * Looking for test storage... 00:21:14.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:14.176 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:14.176 --rc genhtml_branch_coverage=1 00:21:14.176 --rc genhtml_function_coverage=1 00:21:14.176 --rc genhtml_legend=1 00:21:14.176 --rc geninfo_all_blocks=1 00:21:14.176 --rc geninfo_unexecuted_blocks=1 00:21:14.176 00:21:14.176 ' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.176 --rc genhtml_branch_coverage=1 00:21:14.176 --rc genhtml_function_coverage=1 00:21:14.176 --rc genhtml_legend=1 00:21:14.176 --rc geninfo_all_blocks=1 00:21:14.176 --rc geninfo_unexecuted_blocks=1 00:21:14.176 00:21:14.176 ' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.176 --rc genhtml_branch_coverage=1 00:21:14.176 --rc genhtml_function_coverage=1 00:21:14.176 --rc genhtml_legend=1 00:21:14.176 --rc geninfo_all_blocks=1 00:21:14.176 --rc geninfo_unexecuted_blocks=1 00:21:14.176 00:21:14.176 ' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.176 --rc genhtml_branch_coverage=1 00:21:14.176 --rc genhtml_function_coverage=1 00:21:14.176 --rc genhtml_legend=1 00:21:14.176 --rc geninfo_all_blocks=1 00:21:14.176 --rc geninfo_unexecuted_blocks=1 00:21:14.176 00:21:14.176 ' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.176 18:10:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.176 
18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.176 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0c9edcc074f14a92a5655ef9712cd8bd 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.177 18:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.079 18:10:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:16.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:16.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.079 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:16.080 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:16.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.080 18:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.080 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:16.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:21:16.340 00:21:16.340 --- 10.0.0.2 ping statistics --- 00:21:16.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.340 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:21:16.340 00:21:16.340 --- 10.0.0.1 ping statistics --- 00:21:16.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.340 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.340 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1528045 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1528045 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1528045 ']' 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.341 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.341 [2024-12-09 18:10:39.215216] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:21:16.341 [2024-12-09 18:10:39.215310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.341 [2024-12-09 18:10:39.288325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.341 [2024-12-09 18:10:39.346516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.341 [2024-12-09 18:10:39.346586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.341 [2024-12-09 18:10:39.346617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.341 [2024-12-09 18:10:39.346629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.341 [2024-12-09 18:10:39.346638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.341 [2024-12-09 18:10:39.347311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.599 [2024-12-09 18:10:39.492426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.599 null0 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.599 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0c9edcc074f14a92a5655ef9712cd8bd 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.600 [2024-12-09 18:10:39.532738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.600 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.858 nvme0n1 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.858 [ 00:21:16.858 { 00:21:16.858 "name": "nvme0n1", 00:21:16.858 "aliases": [ 00:21:16.858 "0c9edcc0-74f1-4a92-a565-5ef9712cd8bd" 00:21:16.858 ], 00:21:16.858 "product_name": "NVMe disk", 00:21:16.858 "block_size": 512, 00:21:16.858 "num_blocks": 2097152, 00:21:16.858 "uuid": "0c9edcc0-74f1-4a92-a565-5ef9712cd8bd", 00:21:16.858 "numa_id": 0, 00:21:16.858 "assigned_rate_limits": { 00:21:16.858 "rw_ios_per_sec": 0, 00:21:16.858 "rw_mbytes_per_sec": 0, 00:21:16.858 "r_mbytes_per_sec": 0, 00:21:16.858 "w_mbytes_per_sec": 0 00:21:16.858 }, 00:21:16.858 "claimed": false, 00:21:16.858 "zoned": false, 00:21:16.858 "supported_io_types": { 00:21:16.858 "read": true, 00:21:16.858 "write": true, 00:21:16.858 "unmap": false, 00:21:16.858 "flush": true, 00:21:16.858 "reset": true, 00:21:16.858 "nvme_admin": true, 00:21:16.858 "nvme_io": true, 00:21:16.858 "nvme_io_md": false, 00:21:16.858 "write_zeroes": true, 00:21:16.858 "zcopy": false, 00:21:16.858 "get_zone_info": false, 00:21:16.858 "zone_management": false, 00:21:16.858 "zone_append": false, 00:21:16.858 "compare": true, 00:21:16.858 "compare_and_write": true, 00:21:16.858 "abort": true, 00:21:16.858 "seek_hole": false, 00:21:16.858 "seek_data": false, 00:21:16.858 "copy": true, 00:21:16.858 
"nvme_iov_md": false 00:21:16.858 }, 00:21:16.858 "memory_domains": [ 00:21:16.858 { 00:21:16.858 "dma_device_id": "system", 00:21:16.858 "dma_device_type": 1 00:21:16.858 } 00:21:16.858 ], 00:21:16.858 "driver_specific": { 00:21:16.858 "nvme": [ 00:21:16.858 { 00:21:16.858 "trid": { 00:21:16.858 "trtype": "TCP", 00:21:16.858 "adrfam": "IPv4", 00:21:16.858 "traddr": "10.0.0.2", 00:21:16.858 "trsvcid": "4420", 00:21:16.858 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:16.858 }, 00:21:16.858 "ctrlr_data": { 00:21:16.858 "cntlid": 1, 00:21:16.858 "vendor_id": "0x8086", 00:21:16.858 "model_number": "SPDK bdev Controller", 00:21:16.858 "serial_number": "00000000000000000000", 00:21:16.858 "firmware_revision": "25.01", 00:21:16.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.858 "oacs": { 00:21:16.858 "security": 0, 00:21:16.858 "format": 0, 00:21:16.858 "firmware": 0, 00:21:16.858 "ns_manage": 0 00:21:16.858 }, 00:21:16.858 "multi_ctrlr": true, 00:21:16.858 "ana_reporting": false 00:21:16.858 }, 00:21:16.858 "vs": { 00:21:16.858 "nvme_version": "1.3" 00:21:16.858 }, 00:21:16.858 "ns_data": { 00:21:16.858 "id": 1, 00:21:16.858 "can_share": true 00:21:16.858 } 00:21:16.858 } 00:21:16.858 ], 00:21:16.858 "mp_policy": "active_passive" 00:21:16.858 } 00:21:16.858 } 00:21:16.858 ] 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.858 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.858 [2024-12-09 18:10:39.781805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.858 [2024-12-09 18:10:39.781918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xbc1740 (9): Bad file descriptor 00:21:17.117 [2024-12-09 18:10:39.913669] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:17.117 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.117 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:17.117 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.117 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.117 [ 00:21:17.117 { 00:21:17.117 "name": "nvme0n1", 00:21:17.117 "aliases": [ 00:21:17.117 "0c9edcc0-74f1-4a92-a565-5ef9712cd8bd" 00:21:17.117 ], 00:21:17.117 "product_name": "NVMe disk", 00:21:17.117 "block_size": 512, 00:21:17.117 "num_blocks": 2097152, 00:21:17.117 "uuid": "0c9edcc0-74f1-4a92-a565-5ef9712cd8bd", 00:21:17.117 "numa_id": 0, 00:21:17.117 "assigned_rate_limits": { 00:21:17.117 "rw_ios_per_sec": 0, 00:21:17.117 "rw_mbytes_per_sec": 0, 00:21:17.117 "r_mbytes_per_sec": 0, 00:21:17.117 "w_mbytes_per_sec": 0 00:21:17.117 }, 00:21:17.117 "claimed": false, 00:21:17.117 "zoned": false, 00:21:17.117 "supported_io_types": { 00:21:17.117 "read": true, 00:21:17.117 "write": true, 00:21:17.117 "unmap": false, 00:21:17.117 "flush": true, 00:21:17.117 "reset": true, 00:21:17.117 "nvme_admin": true, 00:21:17.117 "nvme_io": true, 00:21:17.117 "nvme_io_md": false, 00:21:17.117 "write_zeroes": true, 00:21:17.117 "zcopy": false, 00:21:17.117 "get_zone_info": false, 00:21:17.117 "zone_management": false, 00:21:17.117 "zone_append": false, 00:21:17.117 "compare": true, 00:21:17.117 "compare_and_write": true, 00:21:17.117 "abort": true, 00:21:17.117 "seek_hole": false, 00:21:17.117 "seek_data": false, 00:21:17.117 "copy": true, 00:21:17.117 "nvme_iov_md": false 00:21:17.117 }, 00:21:17.117 "memory_domains": [ 
00:21:17.117 { 00:21:17.117 "dma_device_id": "system", 00:21:17.117 "dma_device_type": 1 00:21:17.117 } 00:21:17.117 ], 00:21:17.117 "driver_specific": { 00:21:17.117 "nvme": [ 00:21:17.117 { 00:21:17.117 "trid": { 00:21:17.117 "trtype": "TCP", 00:21:17.117 "adrfam": "IPv4", 00:21:17.117 "traddr": "10.0.0.2", 00:21:17.117 "trsvcid": "4420", 00:21:17.117 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:17.117 }, 00:21:17.117 "ctrlr_data": { 00:21:17.117 "cntlid": 2, 00:21:17.117 "vendor_id": "0x8086", 00:21:17.117 "model_number": "SPDK bdev Controller", 00:21:17.117 "serial_number": "00000000000000000000", 00:21:17.117 "firmware_revision": "25.01", 00:21:17.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.117 "oacs": { 00:21:17.117 "security": 0, 00:21:17.117 "format": 0, 00:21:17.117 "firmware": 0, 00:21:17.117 "ns_manage": 0 00:21:17.117 }, 00:21:17.117 "multi_ctrlr": true, 00:21:17.117 "ana_reporting": false 00:21:17.117 }, 00:21:17.117 "vs": { 00:21:17.117 "nvme_version": "1.3" 00:21:17.117 }, 00:21:17.117 "ns_data": { 00:21:17.117 "id": 1, 00:21:17.117 "can_share": true 00:21:17.117 } 00:21:17.117 } 00:21:17.118 ], 00:21:17.118 "mp_policy": "active_passive" 00:21:17.118 } 00:21:17.118 } 00:21:17.118 ] 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PjSJSXvkFs 
00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PjSJSXvkFs 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.PjSJSXvkFs 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 [2024-12-09 18:10:39.966442] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.118 [2024-12-09 18:10:39.966589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 [2024-12-09 18:10:39.982492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.118 nvme0n1 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 [ 00:21:17.118 { 00:21:17.118 "name": "nvme0n1", 00:21:17.118 "aliases": [ 00:21:17.118 "0c9edcc0-74f1-4a92-a565-5ef9712cd8bd" 00:21:17.118 ], 00:21:17.118 "product_name": "NVMe disk", 00:21:17.118 "block_size": 512, 00:21:17.118 "num_blocks": 2097152, 00:21:17.118 "uuid": "0c9edcc0-74f1-4a92-a565-5ef9712cd8bd", 00:21:17.118 "numa_id": 0, 00:21:17.118 "assigned_rate_limits": { 00:21:17.118 "rw_ios_per_sec": 0, 00:21:17.118 
"rw_mbytes_per_sec": 0, 00:21:17.118 "r_mbytes_per_sec": 0, 00:21:17.118 "w_mbytes_per_sec": 0 00:21:17.118 }, 00:21:17.118 "claimed": false, 00:21:17.118 "zoned": false, 00:21:17.118 "supported_io_types": { 00:21:17.118 "read": true, 00:21:17.118 "write": true, 00:21:17.118 "unmap": false, 00:21:17.118 "flush": true, 00:21:17.118 "reset": true, 00:21:17.118 "nvme_admin": true, 00:21:17.118 "nvme_io": true, 00:21:17.118 "nvme_io_md": false, 00:21:17.118 "write_zeroes": true, 00:21:17.118 "zcopy": false, 00:21:17.118 "get_zone_info": false, 00:21:17.118 "zone_management": false, 00:21:17.118 "zone_append": false, 00:21:17.118 "compare": true, 00:21:17.118 "compare_and_write": true, 00:21:17.118 "abort": true, 00:21:17.118 "seek_hole": false, 00:21:17.118 "seek_data": false, 00:21:17.118 "copy": true, 00:21:17.118 "nvme_iov_md": false 00:21:17.118 }, 00:21:17.118 "memory_domains": [ 00:21:17.118 { 00:21:17.118 "dma_device_id": "system", 00:21:17.118 "dma_device_type": 1 00:21:17.118 } 00:21:17.118 ], 00:21:17.118 "driver_specific": { 00:21:17.118 "nvme": [ 00:21:17.118 { 00:21:17.118 "trid": { 00:21:17.118 "trtype": "TCP", 00:21:17.118 "adrfam": "IPv4", 00:21:17.118 "traddr": "10.0.0.2", 00:21:17.118 "trsvcid": "4421", 00:21:17.118 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:17.118 }, 00:21:17.118 "ctrlr_data": { 00:21:17.118 "cntlid": 3, 00:21:17.118 "vendor_id": "0x8086", 00:21:17.118 "model_number": "SPDK bdev Controller", 00:21:17.118 "serial_number": "00000000000000000000", 00:21:17.118 "firmware_revision": "25.01", 00:21:17.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.118 "oacs": { 00:21:17.118 "security": 0, 00:21:17.118 "format": 0, 00:21:17.118 "firmware": 0, 00:21:17.118 "ns_manage": 0 00:21:17.118 }, 00:21:17.118 "multi_ctrlr": true, 00:21:17.118 "ana_reporting": false 00:21:17.118 }, 00:21:17.118 "vs": { 00:21:17.118 "nvme_version": "1.3" 00:21:17.118 }, 00:21:17.118 "ns_data": { 00:21:17.118 "id": 1, 00:21:17.118 "can_share": true 00:21:17.118 } 
00:21:17.118 } 00:21:17.118 ], 00:21:17.118 "mp_policy": "active_passive" 00:21:17.118 } 00:21:17.118 } 00:21:17.118 ] 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.PjSJSXvkFs 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.118 rmmod nvme_tcp 00:21:17.118 rmmod nvme_fabrics 00:21:17.118 rmmod nvme_keyring 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:17.118 18:10:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1528045 ']' 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1528045 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1528045 ']' 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1528045 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.118 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1528045 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1528045' 00:21:17.376 killing process with pid 1528045 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1528045 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1528045 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:17.376 
18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.376 18:10:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.920 00:21:19.920 real 0m5.658s 00:21:19.920 user 0m2.120s 00:21:19.920 sys 0m1.969s 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.920 ************************************ 00:21:19.920 END TEST nvmf_async_init 00:21:19.920 ************************************ 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.920 ************************************ 00:21:19.920 START TEST dma 00:21:19.920 ************************************ 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:19.920 * Looking for test storage... 00:21:19.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:19.920 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:19.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.921 --rc genhtml_branch_coverage=1 00:21:19.921 --rc genhtml_function_coverage=1 00:21:19.921 --rc genhtml_legend=1 00:21:19.921 --rc geninfo_all_blocks=1 00:21:19.921 --rc geninfo_unexecuted_blocks=1 00:21:19.921 00:21:19.921 ' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:19.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.921 --rc genhtml_branch_coverage=1 00:21:19.921 --rc genhtml_function_coverage=1 
00:21:19.921 --rc genhtml_legend=1 00:21:19.921 --rc geninfo_all_blocks=1 00:21:19.921 --rc geninfo_unexecuted_blocks=1 00:21:19.921 00:21:19.921 ' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:19.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.921 --rc genhtml_branch_coverage=1 00:21:19.921 --rc genhtml_function_coverage=1 00:21:19.921 --rc genhtml_legend=1 00:21:19.921 --rc geninfo_all_blocks=1 00:21:19.921 --rc geninfo_unexecuted_blocks=1 00:21:19.921 00:21:19.921 ' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:19.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.921 --rc genhtml_branch_coverage=1 00:21:19.921 --rc genhtml_function_coverage=1 00:21:19.921 --rc genhtml_legend=1 00:21:19.921 --rc geninfo_all_blocks=1 00:21:19.921 --rc geninfo_unexecuted_blocks=1 00:21:19.921 00:21:19.921 ' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:19.921 
18:10:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:19.921 00:21:19.921 real 0m0.175s 00:21:19.921 user 0m0.112s 00:21:19.921 sys 0m0.072s 00:21:19.921 18:10:42 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:19.921 ************************************ 00:21:19.921 END TEST dma 00:21:19.921 ************************************ 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.921 ************************************ 00:21:19.921 START TEST nvmf_identify 00:21:19.921 ************************************ 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:19.921 * Looking for test storage... 
00:21:19.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.921 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:19.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.922 --rc genhtml_branch_coverage=1 00:21:19.922 --rc genhtml_function_coverage=1 00:21:19.922 --rc genhtml_legend=1 00:21:19.922 --rc geninfo_all_blocks=1 00:21:19.922 --rc geninfo_unexecuted_blocks=1 00:21:19.922 00:21:19.922 ' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:21:19.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.922 --rc genhtml_branch_coverage=1 00:21:19.922 --rc genhtml_function_coverage=1 00:21:19.922 --rc genhtml_legend=1 00:21:19.922 --rc geninfo_all_blocks=1 00:21:19.922 --rc geninfo_unexecuted_blocks=1 00:21:19.922 00:21:19.922 ' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:19.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.922 --rc genhtml_branch_coverage=1 00:21:19.922 --rc genhtml_function_coverage=1 00:21:19.922 --rc genhtml_legend=1 00:21:19.922 --rc geninfo_all_blocks=1 00:21:19.922 --rc geninfo_unexecuted_blocks=1 00:21:19.922 00:21:19.922 ' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:19.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.922 --rc genhtml_branch_coverage=1 00:21:19.922 --rc genhtml_function_coverage=1 00:21:19.922 --rc genhtml_legend=1 00:21:19.922 --rc geninfo_all_blocks=1 00:21:19.922 --rc geninfo_unexecuted_blocks=1 00:21:19.922 00:21:19.922 ' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
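The lcov version gate traced above (`cmp_versions 1.15 '<' 2` in scripts/common.sh) splits each version string on `.` and `-` and compares component by component. A minimal re-implementation of that pattern as a sketch, assuming the simplified semantics visible in the trace (the `lt` name mirrors the log, but this is not the SPDK source):

```shell
# Sketch of the component-wise version comparison traced above.
# "lt A B" succeeds when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"   # split on dots and dashes, as in the trace
    IFS='.-' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

Forcing base 10 with `10#` keeps components like `09` from being parsed as octal; this is why the run above selects the lcov-1.x coverage options.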
NVMF_IP_LEAST_ADDR=8 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
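The repeated `common.sh: line 33: [: : integer expression expected` complaint above is a real (if benign) failure mode: an unset variable expands to the empty string, which then reaches `-eq` inside `'[' '' -eq 1 ']'`. A hedged sketch of the failure and the usual guard; the flag name here is hypothetical, chosen only for illustration:

```shell
# Reproduce the failure mode: an empty expansion fed to an arithmetic test.
SOME_TEST_FLAG=""                      # hypothetical flag name, unset in this run
if [ "$SOME_TEST_FLAG" -eq 1 ] 2>/dev/null; then
    echo "enabled"                     # never reached; the test itself errors out
fi

# Defensive form: default the expansion so the operand is always an integer.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The `${var:-0}` default fires on both unset and empty values, so the test always sees a number and the script no longer leaks the `[:` diagnostic into the log.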
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.922 18:10:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:21.827 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.827 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.827 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.828 18:10:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:21.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.828 
18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:21.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:21.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:21.828 18:10:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:21.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.828 18:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:22.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:21:22.086 00:21:22.086 --- 10.0.0.2 ping statistics --- 00:21:22.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.086 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:22.086 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:21:22.086 00:21:22.086 --- 10.0.0.1 ping statistics --- 00:21:22.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.087 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1530209 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1530209 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1530209 ']' 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
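The `nvmf_tcp_init` sequence traced above moves one port of the NIC pair into a fresh network namespace, addresses both ends, opens TCP port 4420 in the firewall, and ping-checks the path before launching `nvmf_tgt` inside the namespace. A sketch of that sequence using the interface and namespace names from this run; it emits the commands as a plan rather than executing them, since the real steps need root:

```shell
# Sketch of the namespace plumbing traced above (nvmf_tcp_init).
# Names are taken from this run; printed as a plan so it runs unprivileged.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

plumb() {
    cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add $INI_IP/24 dev $INI_IF
ip netns exec $NS ip addr add $TGT_IP/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 $TGT_IP
ip netns exec $NS ping -c 1 $INI_IP
EOF
}
plumb
```

As the trace shows at common.sh@790, the real `ipts` wrapper also tags the iptables rule with an `SPDK_NVMF:` comment so teardown can find and delete exactly the rules it inserted.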
00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.087 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.346 [2024-12-09 18:10:45.167395] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:21:22.346 [2024-12-09 18:10:45.167498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.346 [2024-12-09 18:10:45.245664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.346 [2024-12-09 18:10:45.309362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.346 [2024-12-09 18:10:45.309430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.346 [2024-12-09 18:10:45.309457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.346 [2024-12-09 18:10:45.309468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.346 [2024-12-09 18:10:45.309478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:22.346 [2024-12-09 18:10:45.311169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.346 [2024-12-09 18:10:45.311236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.346 [2024-12-09 18:10:45.311286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.346 [2024-12-09 18:10:45.311290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 [2024-12-09 18:10:45.441863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 Malloc0 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 [2024-12-09 18:10:45.527017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 18:10:45 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.606 [ 00:21:22.606 { 00:21:22.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:22.606 "subtype": "Discovery", 00:21:22.606 "listen_addresses": [ 00:21:22.606 { 00:21:22.606 "trtype": "TCP", 00:21:22.606 "adrfam": "IPv4", 00:21:22.606 "traddr": "10.0.0.2", 00:21:22.606 "trsvcid": "4420" 00:21:22.606 } 00:21:22.606 ], 00:21:22.606 "allow_any_host": true, 00:21:22.606 "hosts": [] 00:21:22.606 }, 00:21:22.606 { 00:21:22.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.606 "subtype": "NVMe", 00:21:22.606 "listen_addresses": [ 00:21:22.606 { 00:21:22.606 "trtype": "TCP", 00:21:22.606 "adrfam": "IPv4", 00:21:22.606 "traddr": "10.0.0.2", 00:21:22.606 "trsvcid": "4420" 00:21:22.606 } 00:21:22.606 ], 00:21:22.606 "allow_any_host": true, 00:21:22.606 "hosts": [], 00:21:22.606 "serial_number": "SPDK00000000000001", 00:21:22.606 "model_number": "SPDK bdev Controller", 00:21:22.606 "max_namespaces": 32, 00:21:22.606 "min_cntlid": 1, 00:21:22.606 "max_cntlid": 65519, 00:21:22.606 "namespaces": [ 00:21:22.606 { 00:21:22.606 "nsid": 1, 00:21:22.606 "bdev_name": "Malloc0", 00:21:22.606 "name": "Malloc0", 00:21:22.606 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:22.606 "eui64": "ABCDEF0123456789", 00:21:22.606 "uuid": "dc0f8cf5-a7e9-4886-8909-f70fbaf79f68" 00:21:22.606 } 00:21:22.606 ] 00:21:22.606 } 00:21:22.606 ] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.606 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:22.606 [2024-12-09 18:10:45.568345] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:21:22.606 [2024-12-09 18:10:45.568393] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530350 ] 00:21:22.606 [2024-12-09 18:10:45.619809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:22.606 [2024-12-09 18:10:45.619903] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:22.606 [2024-12-09 18:10:45.619914] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:22.606 [2024-12-09 18:10:45.619932] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:22.606 [2024-12-09 18:10:45.619945] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:22.606 [2024-12-09 18:10:45.620718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:22.606 [2024-12-09 18:10:45.620794] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1287690 0 00:21:22.606 [2024-12-09 18:10:45.630572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:22.606 [2024-12-09 18:10:45.630595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:22.606 [2024-12-09 18:10:45.630604] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:22.606 [2024-12-09 18:10:45.630610] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:22.606 [2024-12-09 18:10:45.630659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.630672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.630680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.606 [2024-12-09 18:10:45.630699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:22.606 [2024-12-09 18:10:45.630726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.606 [2024-12-09 18:10:45.638559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.606 [2024-12-09 18:10:45.638578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.606 [2024-12-09 18:10:45.638586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.638610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.606 [2024-12-09 18:10:45.638632] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:22.606 [2024-12-09 18:10:45.638644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:22.606 [2024-12-09 18:10:45.638654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:22.606 [2024-12-09 18:10:45.638692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.638701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.638708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 
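The debug records above trace the start of the NVMe-oF controller initialization state machine: a FABRIC CONNECT on the admin queue (qid:0), followed by FABRIC PROPERTY GET commands that read the Version (VS) and Capabilities (CAP) registers and check CC.EN. A minimal sketch of that ordered phase sequence, with state names paraphrased from the `_nvme_ctrlr_set_state` log lines (the function and list names here are illustrative, not SPDK identifiers):

```python
# Ordered admin-queue bring-up phases visible in the debug records above.
# Names are paraphrased from the "_nvme_ctrlr_set_state" log lines; this
# list and helper are an illustrative sketch, not SPDK code.
INIT_PHASES = [
    "connect adminq",           # FABRIC CONNECT on qid:0, CNTLID assigned
    "read vs",                  # FABRIC PROPERTY GET: Version register
    "read cap",                 # FABRIC PROPERTY GET: Capabilities register
    "check en",                 # FABRIC PROPERTY GET: current CC.EN value
    "enable controller",        # FABRIC PROPERTY SET: write CC.EN = 1
    "wait for CSTS.RDY = 1",    # poll readiness before identify
]

def next_phase(current):
    """Return the phase that follows `current`, or None at the end."""
    i = INIT_PHASES.index(current)
    return INIT_PHASES[i + 1] if i + 1 < len(INIT_PHASES) else None
```

Later records in this log follow the same progression: after CSTS.RDY reads back 1, the driver moves on to identify, AER configuration, and keep-alive setup.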
00:21:22.606 [2024-12-09 18:10:45.638719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.606 [2024-12-09 18:10:45.638744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.606 [2024-12-09 18:10:45.638867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.606 [2024-12-09 18:10:45.638879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.606 [2024-12-09 18:10:45.638886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.638893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.606 [2024-12-09 18:10:45.638902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:22.606 [2024-12-09 18:10:45.638914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:22.606 [2024-12-09 18:10:45.638926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.638934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.638940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.606 [2024-12-09 18:10:45.638950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.606 [2024-12-09 18:10:45.638972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.606 [2024-12-09 18:10:45.639047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.606 [2024-12-09 18:10:45.639058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:22.606 [2024-12-09 18:10:45.639065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.639072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.606 [2024-12-09 18:10:45.639081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:22.606 [2024-12-09 18:10:45.639095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:22.606 [2024-12-09 18:10:45.639107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.639114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.606 [2024-12-09 18:10:45.639121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.607 [2024-12-09 18:10:45.639131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.607 [2024-12-09 18:10:45.639151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.607 [2024-12-09 18:10:45.639221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.607 [2024-12-09 18:10:45.639233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.607 [2024-12-09 18:10:45.639240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.607 [2024-12-09 18:10:45.639256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:22.607 [2024-12-09 18:10:45.639273] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.607 [2024-12-09 18:10:45.639299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.607 [2024-12-09 18:10:45.639319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.607 [2024-12-09 18:10:45.639395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.607 [2024-12-09 18:10:45.639409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.607 [2024-12-09 18:10:45.639416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.607 [2024-12-09 18:10:45.639431] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:22.607 [2024-12-09 18:10:45.639439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:22.607 [2024-12-09 18:10:45.639452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:22.607 [2024-12-09 18:10:45.639562] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:22.607 [2024-12-09 18:10:45.639574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:22.607 [2024-12-09 18:10:45.639590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.607 [2024-12-09 18:10:45.639614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.607 [2024-12-09 18:10:45.639636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.607 [2024-12-09 18:10:45.639756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.607 [2024-12-09 18:10:45.639768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.607 [2024-12-09 18:10:45.639774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.607 [2024-12-09 18:10:45.639789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:22.607 [2024-12-09 18:10:45.639805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.607 [2024-12-09 18:10:45.639830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.607 [2024-12-09 18:10:45.639858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.607 [2024-12-09 
18:10:45.639938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.607 [2024-12-09 18:10:45.639958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.607 [2024-12-09 18:10:45.639967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.639974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.607 [2024-12-09 18:10:45.639981] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:22.607 [2024-12-09 18:10:45.639990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:22.607 [2024-12-09 18:10:45.640004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:22.607 [2024-12-09 18:10:45.640019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:22.607 [2024-12-09 18:10:45.640036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.640044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.607 [2024-12-09 18:10:45.640054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.607 [2024-12-09 18:10:45.640076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.607 [2024-12-09 18:10:45.640199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:22.607 [2024-12-09 18:10:45.640215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:22.607 [2024-12-09 18:10:45.640227] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.640237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=0 00:21:22.607 [2024-12-09 18:10:45.640245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9100) on tqpair(0x1287690): expected_datao=0, payload_size=4096 00:21:22.607 [2024-12-09 18:10:45.640253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.640272] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:22.607 [2024-12-09 18:10:45.640282] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.867 [2024-12-09 18:10:45.680666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.867 [2024-12-09 18:10:45.680673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.867 [2024-12-09 18:10:45.680694] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:22.867 [2024-12-09 18:10:45.680709] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:22.867 [2024-12-09 18:10:45.680717] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:22.867 [2024-12-09 18:10:45.680727] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:22.867 [2024-12-09 18:10:45.680735] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:22.867 [2024-12-09 18:10:45.680743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:22.867 [2024-12-09 18:10:45.680759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:22.867 [2024-12-09 18:10:45.680777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.867 [2024-12-09 18:10:45.680803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.867 [2024-12-09 18:10:45.680827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.867 [2024-12-09 18:10:45.680922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.867 [2024-12-09 18:10:45.680935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.867 [2024-12-09 18:10:45.680942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.867 [2024-12-09 18:10:45.680961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.867 [2024-12-09 18:10:45.680975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690) 00:21:22.867 [2024-12-09 18:10:45.680985] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.868 [2024-12-09 18:10:45.680996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.868 [2024-12-09 18:10:45.681027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.868 [2024-12-09 18:10:45.681059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.868 [2024-12-09 18:10:45.681089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:22.868 [2024-12-09 18:10:45.681108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:22.868 [2024-12-09 18:10:45.681121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.868 [2024-12-09 18:10:45.681175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0 00:21:22.868 [2024-12-09 18:10:45.681186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9280, cid 1, qid 0 00:21:22.868 [2024-12-09 18:10:45.681193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9400, cid 2, qid 0 00:21:22.868 [2024-12-09 18:10:45.681204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.868 [2024-12-09 18:10:45.681212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0 00:21:22.868 [2024-12-09 18:10:45.681397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.868 [2024-12-09 18:10:45.681411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.868 [2024-12-09 18:10:45.681418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690 00:21:22.868 [2024-12-09 18:10:45.681433] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:22.868 [2024-12-09 18:10:45.681442] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:22.868 [2024-12-09 18:10:45.681460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.868 [2024-12-09 18:10:45.681501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0 00:21:22.868 [2024-12-09 18:10:45.681597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:22.868 [2024-12-09 18:10:45.681612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:22.868 [2024-12-09 18:10:45.681619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4 00:21:22.868 [2024-12-09 18:10:45.681632] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096 00:21:22.868 [2024-12-09 18:10:45.681640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681658] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681667] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.868 [2024-12-09 18:10:45.681688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.868 [2024-12-09 18:10:45.681694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x12e9700) on tqpair=0x1287690 00:21:22.868 [2024-12-09 18:10:45.681720] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:22.868 [2024-12-09 18:10:45.681759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.868 [2024-12-09 18:10:45.681791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.681805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.681814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.868 [2024-12-09 18:10:45.681841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0 00:21:22.868 [2024-12-09 18:10:45.681852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0 00:21:22.868 [2024-12-09 18:10:45.681978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:22.868 [2024-12-09 18:10:45.681991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:22.868 [2024-12-09 18:10:45.681998] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.682004] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=1024, cccid=4 00:21:22.868 [2024-12-09 18:10:45.682011] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=1024 00:21:22.868 [2024-12-09 18:10:45.682018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.682028] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.682034] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.682043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.868 [2024-12-09 18:10:45.682051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.868 [2024-12-09 18:10:45.682058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.682064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690 00:21:22.868 [2024-12-09 18:10:45.722644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.868 [2024-12-09 18:10:45.722663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.868 [2024-12-09 18:10:45.722671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.722678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690 00:21:22.868 [2024-12-09 18:10:45.722697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.722706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.722717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.868 [2024-12-09 18:10:45.722747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0 00:21:22.868 [2024-12-09 18:10:45.722854] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:22.868 [2024-12-09 18:10:45.722866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:22.868 [2024-12-09 18:10:45.722873] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.722879] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=3072, cccid=4 00:21:22.868 [2024-12-09 18:10:45.722887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=3072 00:21:22.868 [2024-12-09 18:10:45.722894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.722913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.722922] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.767559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.868 [2024-12-09 18:10:45.767577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.868 [2024-12-09 18:10:45.767584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.767606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690 00:21:22.868 [2024-12-09 18:10:45.767622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.767631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690) 00:21:22.868 [2024-12-09 18:10:45.767642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.868 [2024-12-09 18:10:45.767672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0 00:21:22.868 [2024-12-09 
18:10:45.767769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:22.868 [2024-12-09 18:10:45.767786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:22.868 [2024-12-09 18:10:45.767793] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.767799] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=8, cccid=4 00:21:22.868 [2024-12-09 18:10:45.767807] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=8 00:21:22.868 [2024-12-09 18:10:45.767814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.767824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.767831] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.810562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.868 [2024-12-09 18:10:45.810581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.868 [2024-12-09 18:10:45.810589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.868 [2024-12-09 18:10:45.810611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690 00:21:22.868 ===================================================== 00:21:22.868 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:22.868 ===================================================== 00:21:22.868 Controller Capabilities/Features 00:21:22.868 ================================ 00:21:22.868 Vendor ID: 0000 00:21:22.868 Subsystem Vendor ID: 0000 00:21:22.868 Serial Number: .................... 00:21:22.869 Model Number: ........................................ 
00:21:22.869 Firmware Version: 25.01 00:21:22.869 Recommended Arb Burst: 0 00:21:22.869 IEEE OUI Identifier: 00 00 00 00:21:22.869 Multi-path I/O 00:21:22.869 May have multiple subsystem ports: No 00:21:22.869 May have multiple controllers: No 00:21:22.869 Associated with SR-IOV VF: No 00:21:22.869 Max Data Transfer Size: 131072 00:21:22.869 Max Number of Namespaces: 0 00:21:22.869 Max Number of I/O Queues: 1024 00:21:22.869 NVMe Specification Version (VS): 1.3 00:21:22.869 NVMe Specification Version (Identify): 1.3 00:21:22.869 Maximum Queue Entries: 128 00:21:22.869 Contiguous Queues Required: Yes 00:21:22.869 Arbitration Mechanisms Supported 00:21:22.869 Weighted Round Robin: Not Supported 00:21:22.869 Vendor Specific: Not Supported 00:21:22.869 Reset Timeout: 15000 ms 00:21:22.869 Doorbell Stride: 4 bytes 00:21:22.869 NVM Subsystem Reset: Not Supported 00:21:22.869 Command Sets Supported 00:21:22.869 NVM Command Set: Supported 00:21:22.869 Boot Partition: Not Supported 00:21:22.869 Memory Page Size Minimum: 4096 bytes 00:21:22.869 Memory Page Size Maximum: 4096 bytes 00:21:22.869 Persistent Memory Region: Not Supported 00:21:22.869 Optional Asynchronous Events Supported 00:21:22.869 Namespace Attribute Notices: Not Supported 00:21:22.869 Firmware Activation Notices: Not Supported 00:21:22.869 ANA Change Notices: Not Supported 00:21:22.869 PLE Aggregate Log Change Notices: Not Supported 00:21:22.869 LBA Status Info Alert Notices: Not Supported 00:21:22.869 EGE Aggregate Log Change Notices: Not Supported 00:21:22.869 Normal NVM Subsystem Shutdown event: Not Supported 00:21:22.869 Zone Descriptor Change Notices: Not Supported 00:21:22.869 Discovery Log Change Notices: Supported 00:21:22.869 Controller Attributes 00:21:22.869 128-bit Host Identifier: Not Supported 00:21:22.869 Non-Operational Permissive Mode: Not Supported 00:21:22.869 NVM Sets: Not Supported 00:21:22.869 Read Recovery Levels: Not Supported 00:21:22.869 Endurance Groups: Not Supported 00:21:22.869 
Predictable Latency Mode: Not Supported 00:21:22.869 Traffic Based Keep ALive: Not Supported 00:21:22.869 Namespace Granularity: Not Supported 00:21:22.869 SQ Associations: Not Supported 00:21:22.869 UUID List: Not Supported 00:21:22.869 Multi-Domain Subsystem: Not Supported 00:21:22.869 Fixed Capacity Management: Not Supported 00:21:22.869 Variable Capacity Management: Not Supported 00:21:22.869 Delete Endurance Group: Not Supported 00:21:22.869 Delete NVM Set: Not Supported 00:21:22.869 Extended LBA Formats Supported: Not Supported 00:21:22.869 Flexible Data Placement Supported: Not Supported 00:21:22.869 00:21:22.869 Controller Memory Buffer Support 00:21:22.869 ================================ 00:21:22.869 Supported: No 00:21:22.869 00:21:22.869 Persistent Memory Region Support 00:21:22.869 ================================ 00:21:22.869 Supported: No 00:21:22.869 00:21:22.869 Admin Command Set Attributes 00:21:22.869 ============================ 00:21:22.869 Security Send/Receive: Not Supported 00:21:22.869 Format NVM: Not Supported 00:21:22.869 Firmware Activate/Download: Not Supported 00:21:22.869 Namespace Management: Not Supported 00:21:22.869 Device Self-Test: Not Supported 00:21:22.869 Directives: Not Supported 00:21:22.869 NVMe-MI: Not Supported 00:21:22.869 Virtualization Management: Not Supported 00:21:22.869 Doorbell Buffer Config: Not Supported 00:21:22.869 Get LBA Status Capability: Not Supported 00:21:22.869 Command & Feature Lockdown Capability: Not Supported 00:21:22.869 Abort Command Limit: 1 00:21:22.869 Async Event Request Limit: 4 00:21:22.869 Number of Firmware Slots: N/A 00:21:22.869 Firmware Slot 1 Read-Only: N/A 00:21:22.869 Firmware Activation Without Reset: N/A 00:21:22.869 Multiple Update Detection Support: N/A 00:21:22.869 Firmware Update Granularity: No Information Provided 00:21:22.869 Per-Namespace SMART Log: No 00:21:22.869 Asymmetric Namespace Access Log Page: Not Supported 00:21:22.869 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:22.869 Command Effects Log Page: Not Supported 00:21:22.869 Get Log Page Extended Data: Supported 00:21:22.869 Telemetry Log Pages: Not Supported 00:21:22.869 Persistent Event Log Pages: Not Supported 00:21:22.869 Supported Log Pages Log Page: May Support 00:21:22.869 Commands Supported & Effects Log Page: Not Supported 00:21:22.869 Feature Identifiers & Effects Log Page:May Support 00:21:22.869 NVMe-MI Commands & Effects Log Page: May Support 00:21:22.869 Data Area 4 for Telemetry Log: Not Supported 00:21:22.869 Error Log Page Entries Supported: 128 00:21:22.869 Keep Alive: Not Supported 00:21:22.869 00:21:22.869 NVM Command Set Attributes 00:21:22.869 ========================== 00:21:22.869 Submission Queue Entry Size 00:21:22.869 Max: 1 00:21:22.869 Min: 1 00:21:22.869 Completion Queue Entry Size 00:21:22.869 Max: 1 00:21:22.869 Min: 1 00:21:22.869 Number of Namespaces: 0 00:21:22.869 Compare Command: Not Supported 00:21:22.869 Write Uncorrectable Command: Not Supported 00:21:22.869 Dataset Management Command: Not Supported 00:21:22.869 Write Zeroes Command: Not Supported 00:21:22.869 Set Features Save Field: Not Supported 00:21:22.869 Reservations: Not Supported 00:21:22.869 Timestamp: Not Supported 00:21:22.869 Copy: Not Supported 00:21:22.869 Volatile Write Cache: Not Present 00:21:22.869 Atomic Write Unit (Normal): 1 00:21:22.869 Atomic Write Unit (PFail): 1 00:21:22.869 Atomic Compare & Write Unit: 1 00:21:22.869 Fused Compare & Write: Supported 00:21:22.869 Scatter-Gather List 00:21:22.869 SGL Command Set: Supported 00:21:22.869 SGL Keyed: Supported 00:21:22.869 SGL Bit Bucket Descriptor: Not Supported 00:21:22.869 SGL Metadata Pointer: Not Supported 00:21:22.869 Oversized SGL: Not Supported 00:21:22.869 SGL Metadata Address: Not Supported 00:21:22.869 SGL Offset: Supported 00:21:22.869 Transport SGL Data Block: Not Supported 00:21:22.869 Replay Protected Memory Block: Not Supported 00:21:22.869 00:21:22.869 
Firmware Slot Information 00:21:22.869 ========================= 00:21:22.869 Active slot: 0 00:21:22.869 00:21:22.869 00:21:22.869 Error Log 00:21:22.869 ========= 00:21:22.869 00:21:22.869 Active Namespaces 00:21:22.869 ================= 00:21:22.869 Discovery Log Page 00:21:22.869 ================== 00:21:22.869 Generation Counter: 2 00:21:22.869 Number of Records: 2 00:21:22.869 Record Format: 0 00:21:22.869 00:21:22.869 Discovery Log Entry 0 00:21:22.869 ---------------------- 00:21:22.869 Transport Type: 3 (TCP) 00:21:22.869 Address Family: 1 (IPv4) 00:21:22.869 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:22.869 Entry Flags: 00:21:22.869 Duplicate Returned Information: 1 00:21:22.869 Explicit Persistent Connection Support for Discovery: 1 00:21:22.869 Transport Requirements: 00:21:22.869 Secure Channel: Not Required 00:21:22.869 Port ID: 0 (0x0000) 00:21:22.869 Controller ID: 65535 (0xffff) 00:21:22.869 Admin Max SQ Size: 128 00:21:22.869 Transport Service Identifier: 4420 00:21:22.869 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:22.869 Transport Address: 10.0.0.2 00:21:22.869 Discovery Log Entry 1 00:21:22.869 ---------------------- 00:21:22.869 Transport Type: 3 (TCP) 00:21:22.869 Address Family: 1 (IPv4) 00:21:22.869 Subsystem Type: 2 (NVM Subsystem) 00:21:22.869 Entry Flags: 00:21:22.869 Duplicate Returned Information: 0 00:21:22.869 Explicit Persistent Connection Support for Discovery: 0 00:21:22.869 Transport Requirements: 00:21:22.869 Secure Channel: Not Required 00:21:22.869 Port ID: 0 (0x0000) 00:21:22.869 Controller ID: 65535 (0xffff) 00:21:22.869 Admin Max SQ Size: 128 00:21:22.869 Transport Service Identifier: 4420 00:21:22.869 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:22.869 Transport Address: 10.0.0.2 [2024-12-09 18:10:45.810731] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:22.869 [2024-12-09 
18:10:45.810753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:22.869 [2024-12-09 18:10:45.810766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.869 [2024-12-09 18:10:45.810775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9280) on tqpair=0x1287690 00:21:22.869 [2024-12-09 18:10:45.810783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.869 [2024-12-09 18:10:45.810791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9400) on tqpair=0x1287690 00:21:22.869 [2024-12-09 18:10:45.810798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.869 [2024-12-09 18:10:45.810806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.869 [2024-12-09 18:10:45.810813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.869 [2024-12-09 18:10:45.810831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.869 [2024-12-09 18:10:45.810840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.810846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.810857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.810882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.810959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 
18:10:45.810973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.810980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.810987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.810998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.811023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.811049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.811145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.811165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.811173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.811188] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:22.870 [2024-12-09 18:10:45.811196] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:22.870 [2024-12-09 18:10:45.811212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 
[2024-12-09 18:10:45.811228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.811238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.811260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.811387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.811400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.811407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.811430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.811456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.811476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.811555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.811570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.811576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on 
tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.811599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.811625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.811645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.811758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.811769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.811776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.811798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.811823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.811847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.811924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.811937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:22.870 [2024-12-09 18:10:45.811944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.811966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.811982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.811992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.812012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.812091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.812104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.812111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.812133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.812159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.812179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.812252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.812265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.812272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.812294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.812320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.812340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.812416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.812428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.812435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.812457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.812483] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.812503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.812632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.812646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.812653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.812675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.812701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.812722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.812844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.812856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.812863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.812885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812894] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.812900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.870 [2024-12-09 18:10:45.812910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.870 [2024-12-09 18:10:45.812930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.870 [2024-12-09 18:10:45.813006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.870 [2024-12-09 18:10:45.813019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.870 [2024-12-09 18:10:45.813026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.813032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.870 [2024-12-09 18:10:45.813048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.813057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.870 [2024-12-09 18:10:45.813063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.813074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.813094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.813170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.813182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.813189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813195] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.813211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.813237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.813257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.813334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.813351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.813358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.813380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.813405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.813425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.813498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 
18:10:45.813509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.813516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.813538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.813572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.813592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.813666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.813679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.813685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.813708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.813733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 
18:10:45.813754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.813826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.813837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.813844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.813866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.813881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.813891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.813911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.813986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.813999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.814010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.814033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.814058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.814079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.814156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.814169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.814175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.814198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.814223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.814243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.814319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.814332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.814338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.814360] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.814386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.814407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.814479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.814491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.814497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.814519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.814534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:22.871 [2024-12-09 18:10:45.818552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.871 [2024-12-09 18:10:45.818582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:22.871 [2024-12-09 18:10:45.818730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:22.871 [2024-12-09 18:10:45.818742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:22.871 [2024-12-09 18:10:45.818749] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:22.871 [2024-12-09 18:10:45.818760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:22.871 [2024-12-09 18:10:45.818774] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:22.871 00:21:22.871 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:22.871 [2024-12-09 18:10:45.851756] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:21:22.871 [2024-12-09 18:10:45.851800] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530358 ] 00:21:22.871 [2024-12-09 18:10:45.898110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:22.871 [2024-12-09 18:10:45.898167] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:22.871 [2024-12-09 18:10:45.898177] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:22.871 [2024-12-09 18:10:45.898192] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:22.871 [2024-12-09 18:10:45.898204] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:22.871 [2024-12-09 18:10:45.901848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:22.871 [2024-12-09 18:10:45.901888] 
nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xad9690 0 00:21:23.133 [2024-12-09 18:10:45.909563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:23.133 [2024-12-09 18:10:45.909583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:23.133 [2024-12-09 18:10:45.909590] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:23.133 [2024-12-09 18:10:45.909597] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:23.133 [2024-12-09 18:10:45.909646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.133 [2024-12-09 18:10:45.909658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.133 [2024-12-09 18:10:45.909665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.133 [2024-12-09 18:10:45.909679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:23.133 [2024-12-09 18:10:45.909706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.133 [2024-12-09 18:10:45.917561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.133 [2024-12-09 18:10:45.917579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.133 [2024-12-09 18:10:45.917587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.133 [2024-12-09 18:10:45.917594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.133 [2024-12-09 18:10:45.917607] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:23.133 [2024-12-09 18:10:45.917633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:23.133 [2024-12-09 18:10:45.917643] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:23.133 [2024-12-09 18:10:45.917661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.133 [2024-12-09 18:10:45.917670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.133 [2024-12-09 18:10:45.917680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.917692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.134 [2024-12-09 18:10:45.917717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.917831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.917845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.917852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.917858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.917867] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:23.134 [2024-12-09 18:10:45.917880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:23.134 [2024-12-09 18:10:45.917892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.917900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.917906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.917917] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.134 [2024-12-09 18:10:45.917938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.918020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.918032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.918039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.918054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:23.134 [2024-12-09 18:10:45.918067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:23.134 [2024-12-09 18:10:45.918080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.918104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.134 [2024-12-09 18:10:45.918125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.918195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.918207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.918214] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.918229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:23.134 [2024-12-09 18:10:45.918246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.918271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.134 [2024-12-09 18:10:45.918292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.918363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.918376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.918383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.918397] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:23.134 [2024-12-09 18:10:45.918405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:23.134 [2024-12-09 18:10:45.918417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:23.134 [2024-12-09 18:10:45.918527] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:23.134 [2024-12-09 18:10:45.918535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:23.134 [2024-12-09 18:10:45.918555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.918582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.134 [2024-12-09 18:10:45.918604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.918711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.918723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.918729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.918744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:23.134 [2024-12-09 18:10:45.918760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.918786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.134 [2024-12-09 18:10:45.918807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.918884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.918896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.918903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.918918] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:23.134 [2024-12-09 18:10:45.918926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:23.134 [2024-12-09 18:10:45.918939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:23.134 [2024-12-09 18:10:45.918953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:23.134 [2024-12-09 18:10:45.918972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.918981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.918992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:23.134 [2024-12-09 18:10:45.919012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.919123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.134 [2024-12-09 18:10:45.919135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.134 [2024-12-09 18:10:45.919142] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919148] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=4096, cccid=0 00:21:23.134 [2024-12-09 18:10:45.919156] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b100) on tqpair(0xad9690): expected_datao=0, payload_size=4096 00:21:23.134 [2024-12-09 18:10:45.919163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919173] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919180] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.919201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.919208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.919225] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:23.134 [2024-12-09 18:10:45.919238] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:23.134 [2024-12-09 18:10:45.919246] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
CNTLID 0x0001 00:21:23.134 [2024-12-09 18:10:45.919253] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:23.134 [2024-12-09 18:10:45.919260] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:23.134 [2024-12-09 18:10:45.919268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:23.134 [2024-12-09 18:10:45.919282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:23.134 [2024-12-09 18:10:45.919294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.134 [2024-12-09 18:10:45.919319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.134 [2024-12-09 18:10:45.919340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.134 [2024-12-09 18:10:45.919411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.134 [2024-12-09 18:10:45.919423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.134 [2024-12-09 18:10:45.919430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.134 [2024-12-09 18:10:45.919446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919454] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.134 [2024-12-09 18:10:45.919464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.919474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.135 [2024-12-09 18:10:45.919484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.919506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.135 [2024-12-09 18:10:45.919516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.919538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.135 [2024-12-09 18:10:45.919557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.919581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.135 [2024-12-09 18:10:45.919590] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.919609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.919622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.919639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.135 [2024-12-09 18:10:45.919661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b100, cid 0, qid 0 00:21:23.135 [2024-12-09 18:10:45.919673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b280, cid 1, qid 0 00:21:23.135 [2024-12-09 18:10:45.919681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b400, cid 2, qid 0 00:21:23.135 [2024-12-09 18:10:45.919688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.135 [2024-12-09 18:10:45.919696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 00:21:23.135 [2024-12-09 18:10:45.919834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.135 [2024-12-09 18:10:45.919846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.135 [2024-12-09 18:10:45.919853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.135 [2024-12-09 18:10:45.919867] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:23.135 [2024-12-09 18:10:45.919876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.919889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.919900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.919914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.919929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.919939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.135 [2024-12-09 18:10:45.919960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 00:21:23.135 [2024-12-09 18:10:45.920079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.135 [2024-12-09 18:10:45.920093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.135 [2024-12-09 18:10:45.920099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.135 [2024-12-09 18:10:45.920175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.920196] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.920211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.920229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.135 [2024-12-09 18:10:45.920250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 00:21:23.135 [2024-12-09 18:10:45.920345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.135 [2024-12-09 18:10:45.920357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.135 [2024-12-09 18:10:45.920363] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920370] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=4096, cccid=4 00:21:23.135 [2024-12-09 18:10:45.920377] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b700) on tqpair(0xad9690): expected_datao=0, payload_size=4096 00:21:23.135 [2024-12-09 18:10:45.920384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920394] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920402] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.135 [2024-12-09 18:10:45.920423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.135 [2024-12-09 18:10:45.920429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.135 [2024-12-09 18:10:45.920452] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:23.135 [2024-12-09 18:10:45.920476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.920495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.920508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.920527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.135 [2024-12-09 18:10:45.920557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 00:21:23.135 [2024-12-09 18:10:45.920660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.135 [2024-12-09 18:10:45.920672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.135 [2024-12-09 18:10:45.920679] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920685] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=4096, cccid=4 00:21:23.135 [2024-12-09 18:10:45.920693] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b700) on tqpair(0xad9690): expected_datao=0, payload_size=4096 00:21:23.135 [2024-12-09 18:10:45.920700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 
[2024-12-09 18:10:45.920715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.135 [2024-12-09 18:10:45.920744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.135 [2024-12-09 18:10:45.920751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.135 [2024-12-09 18:10:45.920779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.920798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:23.135 [2024-12-09 18:10:45.920813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.135 [2024-12-09 18:10:45.920831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.135 [2024-12-09 18:10:45.920853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 00:21:23.135 [2024-12-09 18:10:45.920949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.135 [2024-12-09 18:10:45.920961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.135 [2024-12-09 18:10:45.920967] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920973] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=4096, cccid=4 00:21:23.135 [2024-12-09 18:10:45.920981] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b700) on tqpair(0xad9690): expected_datao=0, payload_size=4096 00:21:23.135 [2024-12-09 18:10:45.920988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.920998] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.921005] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.921017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.135 [2024-12-09 18:10:45.921026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.135 [2024-12-09 18:10:45.921033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.135 [2024-12-09 18:10:45.921039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.921052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:23.136 [2024-12-09 18:10:45.921066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:23.136 [2024-12-09 18:10:45.921081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:23.136 [2024-12-09 18:10:45.921095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:23.136 [2024-12-09 18:10:45.921107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 
00:21:23.136 [2024-12-09 18:10:45.921117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:23.136 [2024-12-09 18:10:45.921126] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:23.136 [2024-12-09 18:10:45.921134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:23.136 [2024-12-09 18:10:45.921142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:23.136 [2024-12-09 18:10:45.921160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.921168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.921179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.921190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.921197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.921203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.921212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.136 [2024-12-09 18:10:45.921237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 00:21:23.136 [2024-12-09 18:10:45.921264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b880, cid 5, qid 0 00:21:23.136 [2024-12-09 18:10:45.921434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:21:23.136 [2024-12-09 18:10:45.921446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.921453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.921460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.921469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.136 [2024-12-09 18:10:45.921478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.921485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.921491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b880) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.921506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.921515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.921526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.925554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b880, cid 5, qid 0 00:21:23.136 [2024-12-09 18:10:45.925576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.136 [2024-12-09 18:10:45.925587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.925595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.925601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b880) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.925619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.925628] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.925639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.925666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b880, cid 5, qid 0 00:21:23.136 [2024-12-09 18:10:45.925784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.136 [2024-12-09 18:10:45.925798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.925805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.925811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b880) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.925827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.925836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.925847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.925867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b880, cid 5, qid 0 00:21:23.136 [2024-12-09 18:10:45.925954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.136 [2024-12-09 18:10:45.925968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.925974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.925981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b880) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.926007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:21:23.136 [2024-12-09 18:10:45.926018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.926029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.926041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.926058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.926070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.926087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.926100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xad9690) 00:21:23.136 [2024-12-09 18:10:45.926117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.136 [2024-12-09 18:10:45.926139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b880, cid 5, qid 0 00:21:23.136 [2024-12-09 18:10:45.926150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b700, cid 4, qid 0 
00:21:23.136 [2024-12-09 18:10:45.926158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3ba00, cid 6, qid 0 00:21:23.136 [2024-12-09 18:10:45.926165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3bb80, cid 7, qid 0 00:21:23.136 [2024-12-09 18:10:45.926365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.136 [2024-12-09 18:10:45.926377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.136 [2024-12-09 18:10:45.926383] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=8192, cccid=5 00:21:23.136 [2024-12-09 18:10:45.926401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b880) on tqpair(0xad9690): expected_datao=0, payload_size=8192 00:21:23.136 [2024-12-09 18:10:45.926409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926427] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926436] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.136 [2024-12-09 18:10:45.926458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.136 [2024-12-09 18:10:45.926464] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926471] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=512, cccid=4 00:21:23.136 [2024-12-09 18:10:45.926478] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b700) on tqpair(0xad9690): expected_datao=0, payload_size=512 00:21:23.136 [2024-12-09 18:10:45.926485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.136 
[2024-12-09 18:10:45.926494] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926501] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.136 [2024-12-09 18:10:45.926518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.136 [2024-12-09 18:10:45.926524] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926530] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=512, cccid=6 00:21:23.136 [2024-12-09 18:10:45.926537] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3ba00) on tqpair(0xad9690): expected_datao=0, payload_size=512 00:21:23.136 [2024-12-09 18:10:45.926552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926562] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926569] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.136 [2024-12-09 18:10:45.926586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.136 [2024-12-09 18:10:45.926593] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926599] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad9690): datao=0, datal=4096, cccid=7 00:21:23.136 [2024-12-09 18:10:45.926606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3bb80) on tqpair(0xad9690): expected_datao=0, payload_size=4096 00:21:23.136 [2024-12-09 18:10:45.926613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926622] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926629] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.136 [2024-12-09 18:10:45.926650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.926656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.136 [2024-12-09 18:10:45.926663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b880) on tqpair=0xad9690 00:21:23.136 [2024-12-09 18:10:45.926681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.136 [2024-12-09 18:10:45.926692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.136 [2024-12-09 18:10:45.926699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.137 [2024-12-09 18:10:45.926706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b700) on tqpair=0xad9690 00:21:23.137 [2024-12-09 18:10:45.926721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.137 [2024-12-09 18:10:45.926731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.137 [2024-12-09 18:10:45.926738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.137 [2024-12-09 18:10:45.926747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3ba00) on tqpair=0xad9690 00:21:23.137 [2024-12-09 18:10:45.926758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.137 [2024-12-09 18:10:45.926768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.137 [2024-12-09 18:10:45.926774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.137 [2024-12-09 18:10:45.926780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3bb80) on tqpair=0xad9690 00:21:23.137 
===================================================== 00:21:23.137 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.137 ===================================================== 00:21:23.137 Controller Capabilities/Features 00:21:23.137 ================================ 00:21:23.137 Vendor ID: 8086 00:21:23.137 Subsystem Vendor ID: 8086 00:21:23.137 Serial Number: SPDK00000000000001 00:21:23.137 Model Number: SPDK bdev Controller 00:21:23.137 Firmware Version: 25.01 00:21:23.137 Recommended Arb Burst: 6 00:21:23.137 IEEE OUI Identifier: e4 d2 5c 00:21:23.137 Multi-path I/O 00:21:23.137 May have multiple subsystem ports: Yes 00:21:23.137 May have multiple controllers: Yes 00:21:23.137 Associated with SR-IOV VF: No 00:21:23.137 Max Data Transfer Size: 131072 00:21:23.137 Max Number of Namespaces: 32 00:21:23.137 Max Number of I/O Queues: 127 00:21:23.137 NVMe Specification Version (VS): 1.3 00:21:23.137 NVMe Specification Version (Identify): 1.3 00:21:23.137 Maximum Queue Entries: 128 00:21:23.137 Contiguous Queues Required: Yes 00:21:23.137 Arbitration Mechanisms Supported 00:21:23.137 Weighted Round Robin: Not Supported 00:21:23.137 Vendor Specific: Not Supported 00:21:23.137 Reset Timeout: 15000 ms 00:21:23.137 Doorbell Stride: 4 bytes 00:21:23.137 NVM Subsystem Reset: Not Supported 00:21:23.137 Command Sets Supported 00:21:23.137 NVM Command Set: Supported 00:21:23.137 Boot Partition: Not Supported 00:21:23.137 Memory Page Size Minimum: 4096 bytes 00:21:23.137 Memory Page Size Maximum: 4096 bytes 00:21:23.137 Persistent Memory Region: Not Supported 00:21:23.137 Optional Asynchronous Events Supported 00:21:23.137 Namespace Attribute Notices: Supported 00:21:23.137 Firmware Activation Notices: Not Supported 00:21:23.137 ANA Change Notices: Not Supported 00:21:23.137 PLE Aggregate Log Change Notices: Not Supported 00:21:23.137 LBA Status Info Alert Notices: Not Supported 00:21:23.137 EGE Aggregate Log Change Notices: Not Supported 
00:21:23.137 Normal NVM Subsystem Shutdown event: Not Supported 00:21:23.137 Zone Descriptor Change Notices: Not Supported 00:21:23.137 Discovery Log Change Notices: Not Supported 00:21:23.137 Controller Attributes 00:21:23.137 128-bit Host Identifier: Supported 00:21:23.137 Non-Operational Permissive Mode: Not Supported 00:21:23.137 NVM Sets: Not Supported 00:21:23.137 Read Recovery Levels: Not Supported 00:21:23.137 Endurance Groups: Not Supported 00:21:23.137 Predictable Latency Mode: Not Supported 00:21:23.137 Traffic Based Keep ALive: Not Supported 00:21:23.137 Namespace Granularity: Not Supported 00:21:23.137 SQ Associations: Not Supported 00:21:23.137 UUID List: Not Supported 00:21:23.137 Multi-Domain Subsystem: Not Supported 00:21:23.137 Fixed Capacity Management: Not Supported 00:21:23.137 Variable Capacity Management: Not Supported 00:21:23.137 Delete Endurance Group: Not Supported 00:21:23.137 Delete NVM Set: Not Supported 00:21:23.137 Extended LBA Formats Supported: Not Supported 00:21:23.137 Flexible Data Placement Supported: Not Supported 00:21:23.137 00:21:23.137 Controller Memory Buffer Support 00:21:23.137 ================================ 00:21:23.137 Supported: No 00:21:23.137 00:21:23.137 Persistent Memory Region Support 00:21:23.137 ================================ 00:21:23.137 Supported: No 00:21:23.137 00:21:23.137 Admin Command Set Attributes 00:21:23.137 ============================ 00:21:23.137 Security Send/Receive: Not Supported 00:21:23.137 Format NVM: Not Supported 00:21:23.137 Firmware Activate/Download: Not Supported 00:21:23.137 Namespace Management: Not Supported 00:21:23.137 Device Self-Test: Not Supported 00:21:23.137 Directives: Not Supported 00:21:23.137 NVMe-MI: Not Supported 00:21:23.137 Virtualization Management: Not Supported 00:21:23.137 Doorbell Buffer Config: Not Supported 00:21:23.137 Get LBA Status Capability: Not Supported 00:21:23.137 Command & Feature Lockdown Capability: Not Supported 00:21:23.137 Abort Command 
Limit: 4 00:21:23.137 Async Event Request Limit: 4 00:21:23.137 Number of Firmware Slots: N/A 00:21:23.137 Firmware Slot 1 Read-Only: N/A 00:21:23.137 Firmware Activation Without Reset: N/A 00:21:23.137 Multiple Update Detection Support: N/A 00:21:23.137 Firmware Update Granularity: No Information Provided 00:21:23.137 Per-Namespace SMART Log: No 00:21:23.137 Asymmetric Namespace Access Log Page: Not Supported 00:21:23.137 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:23.137 Command Effects Log Page: Supported 00:21:23.137 Get Log Page Extended Data: Supported 00:21:23.137 Telemetry Log Pages: Not Supported 00:21:23.137 Persistent Event Log Pages: Not Supported 00:21:23.137 Supported Log Pages Log Page: May Support 00:21:23.137 Commands Supported & Effects Log Page: Not Supported 00:21:23.137 Feature Identifiers & Effects Log Page:May Support 00:21:23.137 NVMe-MI Commands & Effects Log Page: May Support 00:21:23.137 Data Area 4 for Telemetry Log: Not Supported 00:21:23.137 Error Log Page Entries Supported: 128 00:21:23.137 Keep Alive: Supported 00:21:23.137 Keep Alive Granularity: 10000 ms 00:21:23.137 00:21:23.137 NVM Command Set Attributes 00:21:23.137 ========================== 00:21:23.137 Submission Queue Entry Size 00:21:23.137 Max: 64 00:21:23.137 Min: 64 00:21:23.137 Completion Queue Entry Size 00:21:23.137 Max: 16 00:21:23.137 Min: 16 00:21:23.137 Number of Namespaces: 32 00:21:23.137 Compare Command: Supported 00:21:23.137 Write Uncorrectable Command: Not Supported 00:21:23.137 Dataset Management Command: Supported 00:21:23.137 Write Zeroes Command: Supported 00:21:23.137 Set Features Save Field: Not Supported 00:21:23.137 Reservations: Supported 00:21:23.137 Timestamp: Not Supported 00:21:23.137 Copy: Supported 00:21:23.137 Volatile Write Cache: Present 00:21:23.137 Atomic Write Unit (Normal): 1 00:21:23.137 Atomic Write Unit (PFail): 1 00:21:23.137 Atomic Compare & Write Unit: 1 00:21:23.137 Fused Compare & Write: Supported 00:21:23.137 Scatter-Gather 
List 00:21:23.137 SGL Command Set: Supported 00:21:23.137 SGL Keyed: Supported 00:21:23.137 SGL Bit Bucket Descriptor: Not Supported 00:21:23.137 SGL Metadata Pointer: Not Supported 00:21:23.137 Oversized SGL: Not Supported 00:21:23.137 SGL Metadata Address: Not Supported 00:21:23.137 SGL Offset: Supported 00:21:23.137 Transport SGL Data Block: Not Supported 00:21:23.137 Replay Protected Memory Block: Not Supported 00:21:23.137 00:21:23.137 Firmware Slot Information 00:21:23.137 ========================= 00:21:23.137 Active slot: 1 00:21:23.137 Slot 1 Firmware Revision: 25.01 00:21:23.137 00:21:23.137 00:21:23.137 Commands Supported and Effects 00:21:23.137 ============================== 00:21:23.137 Admin Commands 00:21:23.137 -------------- 00:21:23.137 Get Log Page (02h): Supported 00:21:23.137 Identify (06h): Supported 00:21:23.137 Abort (08h): Supported 00:21:23.137 Set Features (09h): Supported 00:21:23.137 Get Features (0Ah): Supported 00:21:23.137 Asynchronous Event Request (0Ch): Supported 00:21:23.137 Keep Alive (18h): Supported 00:21:23.137 I/O Commands 00:21:23.137 ------------ 00:21:23.137 Flush (00h): Supported LBA-Change 00:21:23.137 Write (01h): Supported LBA-Change 00:21:23.137 Read (02h): Supported 00:21:23.137 Compare (05h): Supported 00:21:23.137 Write Zeroes (08h): Supported LBA-Change 00:21:23.137 Dataset Management (09h): Supported LBA-Change 00:21:23.137 Copy (19h): Supported LBA-Change 00:21:23.137 00:21:23.137 Error Log 00:21:23.137 ========= 00:21:23.137 00:21:23.137 Arbitration 00:21:23.137 =========== 00:21:23.137 Arbitration Burst: 1 00:21:23.137 00:21:23.137 Power Management 00:21:23.137 ================ 00:21:23.137 Number of Power States: 1 00:21:23.137 Current Power State: Power State #0 00:21:23.137 Power State #0: 00:21:23.137 Max Power: 0.00 W 00:21:23.137 Non-Operational State: Operational 00:21:23.137 Entry Latency: Not Reported 00:21:23.137 Exit Latency: Not Reported 00:21:23.137 Relative Read Throughput: 0 00:21:23.137 
Relative Read Latency: 0 00:21:23.137 Relative Write Throughput: 0 00:21:23.137 Relative Write Latency: 0 00:21:23.137 Idle Power: Not Reported 00:21:23.137 Active Power: Not Reported 00:21:23.137 Non-Operational Permissive Mode: Not Supported 00:21:23.137 00:21:23.137 Health Information 00:21:23.137 ================== 00:21:23.137 Critical Warnings: 00:21:23.137 Available Spare Space: OK 00:21:23.138 Temperature: OK 00:21:23.138 Device Reliability: OK 00:21:23.138 Read Only: No 00:21:23.138 Volatile Memory Backup: OK 00:21:23.138 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:23.138 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:23.138 Available Spare: 0% 00:21:23.138 Available Spare Threshold: 0% 00:21:23.138 Life Percentage Used:[2024-12-09 18:10:45.926891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.926919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.926930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.926952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3bb80, cid 7, qid 0 00:21:23.138 [2024-12-09 18:10:45.927090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.927103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.927110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3bb80) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927165] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:23.138 [2024-12-09 18:10:45.927185] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xb3b100) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.138 [2024-12-09 18:10:45.927204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b280) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.138 [2024-12-09 18:10:45.927220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b400) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.138 [2024-12-09 18:10:45.927235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.138 [2024-12-09 18:10:45.927255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.927279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.927301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.138 [2024-12-09 18:10:45.927400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.927412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.927419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.927462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.927492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.138 [2024-12-09 18:10:45.927588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.927602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.927609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927624] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:23.138 [2024-12-09 18:10:45.927633] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:23.138 [2024-12-09 18:10:45.927651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.927682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.927703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.138 [2024-12-09 18:10:45.927777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.927789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.927796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.927844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.927864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.138 [2024-12-09 18:10:45.927945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.927958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.927965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.927988] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.927997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.928015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.928035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.138 [2024-12-09 18:10:45.928110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.928123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.928129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.928153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.928184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.928206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.138 [2024-12-09 18:10:45.928279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.138 [2024-12-09 18:10:45.928294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.138 [2024-12-09 18:10:45.928301] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.138 [2024-12-09 18:10:45.928324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.138 [2024-12-09 18:10:45.928340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad9690) 00:21:23.138 [2024-12-09 18:10:45.928351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.138 [2024-12-09 18:10:45.928372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b580, cid 3, qid 0 00:21:23.139 [2024-12-09 18:10:45.933748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.139 [2024-12-09 18:10:45.933762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.139 [2024-12-09 18:10:45.933768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.139 [2024-12-09 18:10:45.933775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3b580) on tqpair=0xad9690 00:21:23.139 [2024-12-09 18:10:45.933788] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:23.139 0% 00:21:23.139 Data Units Read: 0 00:21:23.139 Data Units Written: 0 00:21:23.139 Host Read Commands: 0 00:21:23.139 Host Write Commands: 0 00:21:23.139 Controller Busy Time: 0 minutes 00:21:23.139 Power Cycles: 0 00:21:23.139 Power On Hours: 0 hours 00:21:23.139 Unsafe Shutdowns: 0 00:21:23.139 Unrecoverable Media Errors: 0 00:21:23.139 Lifetime Error Log Entries: 0 00:21:23.139 Warning Temperature Time: 0 minutes 00:21:23.139 Critical Temperature Time: 0 minutes 00:21:23.139 00:21:23.139 Number of Queues 00:21:23.139 ================ 00:21:23.139 Number of I/O Submission Queues: 127 00:21:23.139 Number of I/O Completion Queues: 127 00:21:23.139 00:21:23.139 Active Namespaces 00:21:23.139 ================= 00:21:23.139 Namespace ID:1 00:21:23.139 Error Recovery Timeout: Unlimited 00:21:23.139 Command Set Identifier: NVM (00h) 00:21:23.139 Deallocate: Supported 00:21:23.139 Deallocated/Unwritten Error: Not Supported 00:21:23.139 Deallocated Read Value: Unknown 00:21:23.139 Deallocate in Write Zeroes: Not Supported 00:21:23.139 Deallocated Guard Field: 0xFFFF 00:21:23.139 Flush: Supported 00:21:23.139 Reservation: Supported 00:21:23.139 Namespace Sharing Capabilities: Multiple Controllers 00:21:23.139 Size (in LBAs): 131072 (0GiB) 00:21:23.139 Capacity (in LBAs): 131072 (0GiB) 00:21:23.139 Utilization (in LBAs): 131072 (0GiB) 00:21:23.139 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:23.139 EUI64: ABCDEF0123456789 00:21:23.139 UUID: dc0f8cf5-a7e9-4886-8909-f70fbaf79f68 00:21:23.139 Thin Provisioning: Not Supported 00:21:23.139 Per-NS Atomic Units: Yes 
00:21:23.139 Atomic Boundary Size (Normal): 0 00:21:23.139 Atomic Boundary Size (PFail): 0 00:21:23.139 Atomic Boundary Offset: 0 00:21:23.139 Maximum Single Source Range Length: 65535 00:21:23.139 Maximum Copy Length: 65535 00:21:23.139 Maximum Source Range Count: 1 00:21:23.139 NGUID/EUI64 Never Reused: No 00:21:23.139 Namespace Write Protected: No 00:21:23.139 Number of LBA Formats: 1 00:21:23.139 Current LBA Format: LBA Format #00 00:21:23.139 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:23.139 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.139 18:10:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.139 rmmod nvme_tcp 00:21:23.139 rmmod nvme_fabrics 00:21:23.139 rmmod nvme_keyring 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1530209 ']' 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1530209 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1530209 ']' 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1530209 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1530209 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1530209' 00:21:23.140 killing process with pid 1530209 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1530209 00:21:23.140 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1530209 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- 
# iptr 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.398 18:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:25.937 00:21:25.937 real 0m5.690s 00:21:25.937 user 0m4.828s 00:21:25.937 sys 0m1.914s 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:25.937 ************************************ 00:21:25.937 END TEST nvmf_identify 00:21:25.937 ************************************ 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.937 ************************************ 00:21:25.937 START TEST nvmf_perf 00:21:25.937 
************************************ 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:25.937 * Looking for test storage... 00:21:25.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- 
# : 1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:25.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.937 --rc genhtml_branch_coverage=1 00:21:25.937 --rc genhtml_function_coverage=1 00:21:25.937 --rc genhtml_legend=1 00:21:25.937 --rc geninfo_all_blocks=1 00:21:25.937 --rc 
geninfo_unexecuted_blocks=1 00:21:25.937 00:21:25.937 ' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:25.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.937 --rc genhtml_branch_coverage=1 00:21:25.937 --rc genhtml_function_coverage=1 00:21:25.937 --rc genhtml_legend=1 00:21:25.937 --rc geninfo_all_blocks=1 00:21:25.937 --rc geninfo_unexecuted_blocks=1 00:21:25.937 00:21:25.937 ' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:25.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.937 --rc genhtml_branch_coverage=1 00:21:25.937 --rc genhtml_function_coverage=1 00:21:25.937 --rc genhtml_legend=1 00:21:25.937 --rc geninfo_all_blocks=1 00:21:25.937 --rc geninfo_unexecuted_blocks=1 00:21:25.937 00:21:25.937 ' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:25.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.937 --rc genhtml_branch_coverage=1 00:21:25.937 --rc genhtml_function_coverage=1 00:21:25.937 --rc genhtml_legend=1 00:21:25.937 --rc geninfo_all_blocks=1 00:21:25.937 --rc geninfo_unexecuted_blocks=1 00:21:25.937 00:21:25.937 ' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.937 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:25.938 18:10:48 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.938 18:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.843 18:10:50 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.843 
18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:27.843 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:27.843 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:27.843 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.843 18:10:50 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:27.843 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.843 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:21:27.844 00:21:27.844 --- 10.0.0.2 ping statistics --- 00:21:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.844 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:21:27.844 00:21:27.844 --- 10.0.0.1 ping statistics --- 00:21:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.844 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1532301 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1532301 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1532301 ']' 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.844 18:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:28.104 [2024-12-09 18:10:50.905288] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:21:28.104 [2024-12-09 18:10:50.905384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.104 [2024-12-09 18:10:50.977243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.104 [2024-12-09 18:10:51.032808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.104 [2024-12-09 18:10:51.032857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.104 [2024-12-09 18:10:51.032886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.104 [2024-12-09 18:10:51.032897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.104 [2024-12-09 18:10:51.032906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.104 [2024-12-09 18:10:51.034469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.104 [2024-12-09 18:10:51.034525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.104 [2024-12-09 18:10:51.034592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.104 [2024-12-09 18:10:51.034596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:28.362 18:10:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:31.651 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:31.651 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:31.651 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:31.651 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:31.909 18:10:54 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:31.909 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:31.909 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:31.909 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:31.909 18:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.167 [2024-12-09 18:10:55.142865] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.167 18:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.425 18:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:32.425 18:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:32.683 18:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:32.683 18:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:33.249 18:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.249 [2024-12-09 18:10:56.233680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.249 18:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:21:33.506 18:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:33.506 18:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:33.506 18:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:33.506 18:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:34.886 Initializing NVMe Controllers 00:21:34.886 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:34.886 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:34.886 Initialization complete. Launching workers. 00:21:34.886 ======================================================== 00:21:34.886 Latency(us) 00:21:34.886 Device Information : IOPS MiB/s Average min max 00:21:34.886 PCIE (0000:88:00.0) NSID 1 from core 0: 84838.77 331.40 376.69 37.45 8258.59 00:21:34.886 ======================================================== 00:21:34.886 Total : 84838.77 331.40 376.69 37.45 8258.59 00:21:34.886 00:21:34.886 18:10:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.265 Initializing NVMe Controllers 00:21:36.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:36.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:36.265 Initialization complete. Launching workers. 
00:21:36.265 ======================================================== 00:21:36.265 Latency(us) 00:21:36.265 Device Information : IOPS MiB/s Average min max 00:21:36.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 347.00 1.36 3009.45 140.38 45822.42 00:21:36.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18649.95 7917.08 47929.40 00:21:36.265 ======================================================== 00:21:36.265 Total : 403.00 1.57 5182.82 140.38 47929.40 00:21:36.265 00:21:36.265 18:10:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:37.644 Initializing NVMe Controllers 00:21:37.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:37.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:37.644 Initialization complete. Launching workers. 
00:21:37.644 ======================================================== 00:21:37.644 Latency(us) 00:21:37.644 Device Information : IOPS MiB/s Average min max 00:21:37.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8119.80 31.72 3962.21 849.56 46097.18 00:21:37.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3775.68 14.75 8488.92 5906.74 16304.07 00:21:37.644 ======================================================== 00:21:37.644 Total : 11895.48 46.47 5399.01 849.56 46097.18 00:21:37.644 00:21:37.644 18:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:37.644 18:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:37.644 18:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.180 Initializing NVMe Controllers 00:21:40.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.180 Controller IO queue size 128, less than required. 00:21:40.180 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:40.180 Controller IO queue size 128, less than required. 00:21:40.180 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:40.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:40.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:40.180 Initialization complete. Launching workers. 
00:21:40.180 ======================================================== 00:21:40.180 Latency(us) 00:21:40.180 Device Information : IOPS MiB/s Average min max 00:21:40.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1681.49 420.37 76758.09 56578.30 115620.85 00:21:40.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.80 145.95 227243.54 77914.37 342305.60 00:21:40.180 ======================================================== 00:21:40.180 Total : 2265.30 566.32 115540.60 56578.30 342305.60 00:21:40.180 00:21:40.438 18:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:40.695 No valid NVMe controllers or AIO or URING devices found 00:21:40.695 Initializing NVMe Controllers 00:21:40.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.695 Controller IO queue size 128, less than required. 00:21:40.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:40.695 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:40.695 Controller IO queue size 128, less than required. 00:21:40.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:40.695 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:40.695 WARNING: Some requested NVMe devices were skipped 00:21:40.696 18:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:43.236 Initializing NVMe Controllers 00:21:43.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.236 Controller IO queue size 128, less than required. 00:21:43.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:43.236 Controller IO queue size 128, less than required. 00:21:43.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:43.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:43.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:43.236 Initialization complete. Launching workers. 
00:21:43.236 00:21:43.236 ==================== 00:21:43.236 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:43.236 TCP transport: 00:21:43.236 polls: 9878 00:21:43.236 idle_polls: 6770 00:21:43.236 sock_completions: 3108 00:21:43.236 nvme_completions: 5589 00:21:43.236 submitted_requests: 8378 00:21:43.236 queued_requests: 1 00:21:43.236 00:21:43.236 ==================== 00:21:43.236 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:43.236 TCP transport: 00:21:43.236 polls: 12639 00:21:43.236 idle_polls: 8813 00:21:43.236 sock_completions: 3826 00:21:43.236 nvme_completions: 6569 00:21:43.236 submitted_requests: 9854 00:21:43.236 queued_requests: 1 00:21:43.236 ======================================================== 00:21:43.236 Latency(us) 00:21:43.236 Device Information : IOPS MiB/s Average min max 00:21:43.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1396.18 349.04 94596.01 54739.76 142964.74 00:21:43.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1641.04 410.26 78522.07 40403.75 112193.39 00:21:43.236 ======================================================== 00:21:43.236 Total : 3037.22 759.30 85911.11 40403.75 142964.74 00:21:43.236 00:21:43.236 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:43.236 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.495 rmmod nvme_tcp 00:21:43.495 rmmod nvme_fabrics 00:21:43.495 rmmod nvme_keyring 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1532301 ']' 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1532301 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1532301 ']' 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1532301 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532301 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532301' 00:21:43.495 killing process with pid 1532301 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 
-- # kill 1532301 00:21:43.495 18:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1532301 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.396 18:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.303 00:21:47.303 real 0m21.716s 00:21:47.303 user 1m6.628s 00:21:47.303 sys 0m5.756s 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:47.303 ************************************ 00:21:47.303 END TEST nvmf_perf 00:21:47.303 ************************************ 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.303 ************************************ 00:21:47.303 START TEST nvmf_fio_host 00:21:47.303 ************************************ 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:47.303 * Looking for test storage... 00:21:47.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.303 18:11:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.303 18:11:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:47.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.303 --rc genhtml_branch_coverage=1 00:21:47.303 --rc genhtml_function_coverage=1 00:21:47.303 --rc genhtml_legend=1 00:21:47.303 --rc geninfo_all_blocks=1 00:21:47.303 --rc geninfo_unexecuted_blocks=1 00:21:47.303 00:21:47.303 ' 00:21:47.303 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:47.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.304 --rc genhtml_branch_coverage=1 00:21:47.304 --rc genhtml_function_coverage=1 00:21:47.304 --rc genhtml_legend=1 00:21:47.304 --rc geninfo_all_blocks=1 00:21:47.304 --rc geninfo_unexecuted_blocks=1 00:21:47.304 00:21:47.304 ' 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:47.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.304 --rc genhtml_branch_coverage=1 00:21:47.304 --rc genhtml_function_coverage=1 00:21:47.304 --rc genhtml_legend=1 00:21:47.304 --rc geninfo_all_blocks=1 00:21:47.304 --rc geninfo_unexecuted_blocks=1 00:21:47.304 00:21:47.304 ' 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:47.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.304 --rc genhtml_branch_coverage=1 00:21:47.304 --rc genhtml_function_coverage=1 00:21:47.304 --rc genhtml_legend=1 00:21:47.304 --rc geninfo_all_blocks=1 00:21:47.304 --rc geninfo_unexecuted_blocks=1 00:21:47.304 00:21:47.304 ' 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.304 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:47.563 18:11:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.563 18:11:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.511 18:11:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.511 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.511 18:11:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:21:49.771 00:21:49.771 --- 10.0.0.2 ping statistics --- 00:21:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.771 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:21:49.771 00:21:49.771 --- 10.0.0.1 ping statistics --- 00:21:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.771 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1536275 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1536275 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1536275 ']' 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.771 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.771 [2024-12-09 18:11:12.718598] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:21:49.771 [2024-12-09 18:11:12.718684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.771 [2024-12-09 18:11:12.793293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.029 [2024-12-09 18:11:12.853372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.029 [2024-12-09 18:11:12.853428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:50.029 [2024-12-09 18:11:12.853456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.029 [2024-12-09 18:11:12.853468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.029 [2024-12-09 18:11:12.853477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.029 [2024-12-09 18:11:12.855048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.029 [2024-12-09 18:11:12.855111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.029 [2024-12-09 18:11:12.855178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.029 [2024-12-09 18:11:12.855182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.029 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.029 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:50.029 18:11:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:50.287 [2024-12-09 18:11:13.282879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.287 18:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:50.287 18:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.287 18:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.545 18:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:50.803 Malloc1 00:21:50.803 18:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:51.061 18:11:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:51.319 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.576 [2024-12-09 18:11:14.454668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.577 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:51.834 18:11:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:51.834 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:51.835 18:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:52.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:52.093 fio-3.35 00:21:52.093 Starting 1 thread 00:21:54.620 00:21:54.620 test: (groupid=0, jobs=1): err= 0: pid=1536701: Mon Dec 9 18:11:17 2024 00:21:54.620 read: IOPS=7437, BW=29.1MiB/s (30.5MB/s)(58.3MiB/2007msec) 00:21:54.620 slat (nsec): min=1879, max=104075, avg=2442.04, stdev=1503.00 00:21:54.620 clat (usec): min=3177, max=16761, avg=9356.14, stdev=798.14 00:21:54.620 lat (usec): min=3201, max=16763, avg=9358.58, stdev=798.08 00:21:54.620 clat percentiles (usec): 00:21:54.620 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:21:54.620 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:21:54.620 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:21:54.620 | 99.00th=[11076], 99.50th=[11469], 99.90th=[14222], 99.95th=[15401], 00:21:54.620 | 99.99th=[16712] 00:21:54.620 bw ( KiB/s): min=28288, max=30736, per=99.95%, avg=29736.00, stdev=1033.33, samples=4 00:21:54.620 iops : min= 7072, max= 7684, avg=7434.00, stdev=258.33, samples=4 00:21:54.620 write: IOPS=7421, BW=29.0MiB/s (30.4MB/s)(58.2MiB/2007msec); 0 zone resets 00:21:54.620 slat (nsec): min=2060, max=84307, avg=2620.40, stdev=1160.43 00:21:54.620 clat (usec): min=1349, max=13847, avg=7796.42, stdev=668.33 00:21:54.620 lat (usec): min=1355, max=13850, avg=7799.04, stdev=668.29 00:21:54.620 clat percentiles (usec): 00:21:54.620 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:21:54.620 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7963], 
00:21:54.620 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:21:54.620 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[12125], 99.95th=[13173], 00:21:54.621 | 99.99th=[13829] 00:21:54.621 bw ( KiB/s): min=29320, max=30256, per=99.89%, avg=29654.00, stdev=441.06, samples=4 00:21:54.621 iops : min= 7330, max= 7564, avg=7413.50, stdev=110.26, samples=4 00:21:54.621 lat (msec) : 2=0.01%, 4=0.07%, 10=90.00%, 20=9.92% 00:21:54.621 cpu : usr=62.26%, sys=36.19%, ctx=116, majf=0, minf=35 00:21:54.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:54.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:54.621 issued rwts: total=14927,14895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:54.621 00:21:54.621 Run status group 0 (all jobs): 00:21:54.621 READ: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=58.3MiB (61.1MB), run=2007-2007msec 00:21:54.621 WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=58.2MiB (61.0MB), run=2007-2007msec 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:54.621 18:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:54.621 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:54.621 fio-3.35 00:21:54.621 Starting 1 thread 00:21:57.149 00:21:57.149 test: (groupid=0, jobs=1): err= 0: pid=1537084: Mon Dec 9 18:11:19 2024 00:21:57.149 read: IOPS=8196, BW=128MiB/s (134MB/s)(257MiB/2009msec) 00:21:57.149 slat (nsec): min=2993, max=96887, avg=3734.18, stdev=1919.98 00:21:57.149 clat (usec): min=2040, max=16985, avg=8891.19, stdev=2025.98 00:21:57.149 lat (usec): min=2043, max=16989, avg=8894.92, stdev=2026.02 00:21:57.149 clat percentiles (usec): 00:21:57.149 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7177], 00:21:57.149 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9372], 00:21:57.149 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11600], 95.00th=[12256], 00:21:57.149 | 99.00th=[13566], 99.50th=[14222], 99.90th=[15664], 99.95th=[15926], 00:21:57.149 | 99.99th=[16188] 00:21:57.149 bw ( KiB/s): min=61504, max=73504, per=51.39%, avg=67400.00, stdev=6274.90, samples=4 00:21:57.149 iops : min= 3844, max= 4594, avg=4212.50, stdev=392.18, samples=4 00:21:57.149 write: IOPS=4785, BW=74.8MiB/s (78.4MB/s)(138MiB/1848msec); 0 zone resets 00:21:57.149 slat (usec): min=30, max=208, avg=34.30, stdev= 6.31 00:21:57.150 clat (usec): min=6573, max=23096, avg=11738.77, stdev=2143.56 00:21:57.150 lat (usec): min=6605, max=23127, avg=11773.07, stdev=2143.49 00:21:57.150 clat percentiles (usec): 00:21:57.150 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9241], 
20.00th=[ 9896], 00:21:57.150 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:21:57.150 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14746], 95.00th=[15401], 00:21:57.150 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[22938], 00:21:57.150 | 99.99th=[23200] 00:21:57.150 bw ( KiB/s): min=62752, max=75872, per=91.53%, avg=70088.00, stdev=6497.04, samples=4 00:21:57.150 iops : min= 3922, max= 4742, avg=4380.50, stdev=406.07, samples=4 00:21:57.150 lat (msec) : 4=0.19%, 10=52.91%, 20=46.88%, 50=0.02% 00:21:57.150 cpu : usr=77.74%, sys=21.12%, ctx=41, majf=0, minf=55 00:21:57.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:57.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:57.150 issued rwts: total=16467,8844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:57.150 00:21:57.150 Run status group 0 (all jobs): 00:21:57.150 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (270MB), run=2009-2009msec 00:21:57.150 WRITE: bw=74.8MiB/s (78.4MB/s), 74.8MiB/s-74.8MiB/s (78.4MB/s-78.4MB/s), io=138MiB (145MB), run=1848-1848msec 00:21:57.150 18:11:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.150 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.150 rmmod nvme_tcp 00:21:57.150 rmmod nvme_fabrics 00:21:57.408 rmmod nvme_keyring 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1536275 ']' 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1536275 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1536275 ']' 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1536275 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536275 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536275' 
00:21:57.408 killing process with pid 1536275 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1536275 00:21:57.408 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1536275 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.668 18:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.576 00:21:59.576 real 0m12.378s 00:21:59.576 user 0m36.217s 00:21:59.576 sys 0m4.245s 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.576 ************************************ 
00:21:59.576 END TEST nvmf_fio_host 00:21:59.576 ************************************ 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.576 ************************************ 00:21:59.576 START TEST nvmf_failover 00:21:59.576 ************************************ 00:21:59.576 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:59.834 * Looking for test storage... 00:21:59.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.834 18:11:22 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:59.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.834 --rc genhtml_branch_coverage=1 00:21:59.834 --rc genhtml_function_coverage=1 00:21:59.834 --rc genhtml_legend=1 00:21:59.834 --rc geninfo_all_blocks=1 00:21:59.834 --rc geninfo_unexecuted_blocks=1 00:21:59.834 00:21:59.834 ' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:59.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.834 --rc genhtml_branch_coverage=1 00:21:59.834 --rc genhtml_function_coverage=1 00:21:59.834 --rc genhtml_legend=1 00:21:59.834 --rc geninfo_all_blocks=1 00:21:59.834 --rc geninfo_unexecuted_blocks=1 00:21:59.834 00:21:59.834 ' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:59.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.834 --rc genhtml_branch_coverage=1 00:21:59.834 --rc genhtml_function_coverage=1 00:21:59.834 --rc genhtml_legend=1 00:21:59.834 --rc geninfo_all_blocks=1 00:21:59.834 --rc geninfo_unexecuted_blocks=1 00:21:59.834 00:21:59.834 ' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:59.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.834 --rc genhtml_branch_coverage=1 00:21:59.834 --rc genhtml_function_coverage=1 00:21:59.834 --rc genhtml_legend=1 00:21:59.834 --rc 
geninfo_all_blocks=1 00:21:59.834 --rc geninfo_unexecuted_blocks=1 00:21:59.834 00:21:59.834 ' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:59.834 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.835 18:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.366 18:11:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:02.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.366 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:02.367 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:02.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:02.367 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.367 18:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:22:02.367 00:22:02.367 --- 10.0.0.2 ping statistics --- 00:22:02.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.367 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:22:02.367 00:22:02.367 --- 10.0.0.1 ping statistics --- 00:22:02.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.367 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1539286 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1539286 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1539286 ']' 00:22:02.367 18:11:25 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.367 [2024-12-09 18:11:25.092557] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:22:02.367 [2024-12-09 18:11:25.092651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.367 [2024-12-09 18:11:25.168767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:02.367 [2024-12-09 18:11:25.228603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.367 [2024-12-09 18:11:25.228673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.367 [2024-12-09 18:11:25.228702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.367 [2024-12-09 18:11:25.228714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:02.367 [2024-12-09 18:11:25.228724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.367 [2024-12-09 18:11:25.230302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.367 [2024-12-09 18:11:25.230364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.367 [2024-12-09 18:11:25.230367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.367 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:02.932 [2024-12-09 18:11:25.670505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.932 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:03.190 Malloc0 00:22:03.190 18:11:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.448 18:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:03.705 18:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.962 [2024-12-09 18:11:26.807244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.962 18:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:04.219 [2024-12-09 18:11:27.128164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:04.219 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:04.477 [2024-12-09 18:11:27.421198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1539583 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1539583 /var/tmp/bdevperf.sock 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1539583 ']' 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.477 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:04.734 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.734 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:04.734 18:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:05.299 NVMe0n1 00:22:05.299 18:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:05.557 00:22:05.814 18:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1539719 00:22:05.814 18:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.814 18:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
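Up to this point the trace shows the failover fixture being built: one subsystem (nqn.2016-06.io.spdk:cnode1) with listeners on ports 4420-4422 of 10.0.0.2, and a bdevperf process attached with `-x failover`. The listener-cycling portion that the `host/failover.sh` steps below perform can be sketched as a dry run; the `rpc` wrapper, the `DRY_RUN` switch, and the default `scripts/rpc.py` path are illustrative, not taken from the log:

```shell
# Illustrative sketch (NOT the actual host/failover.sh): cycle listeners on one
# subsystem while bdevperf keeps I/O running. DRY_RUN=1 (the default here) only
# prints the rpc.py invocations; set DRY_RUN=0 with a real target to execute.
RPC=${RPC:-scripts/rpc.py}        # assumed rpc.py location
NQN=nqn.2016-06.io.spdk:cnode1

rpc() {
  if [ "${DRY_RUN:-1}" = 1 ]; then echo "$RPC $*"; else "$RPC" "$@"; fi
}

# Expose the same subsystem on all three ports, as the trace above does.
for port in 4420 4421 4422; do
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# Failover cycle: drop the active listener, wait, then bring it back.
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

With a live target, removing the listener that the attached controller is using forces bdev_nvme to fail over to one of the alternate paths, which is what the trace below exercises.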
00:22:06.747 18:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.005 [2024-12-09 18:11:29.873224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418e00 is same with the state(6) to be set 00:22:07.006 18:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:10.286 18:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:10.286 00:22:10.286 18:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:10.544 18:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:13.820 18:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.820 [2024-12-09 18:11:36.813991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.820 18:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:15.192 18:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:15.192 [2024-12-09 18:11:38.153399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set
00:22:15.192 [2024-12-09 18:11:38.153850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.192 [2024-12-09 18:11:38.153977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 
18:11:38.153988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 [2024-12-09 18:11:38.154130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22deee0 is same with the state(6) to be set 00:22:15.193 18:11:38 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@59 -- # wait 1539719 00:22:21.754 { 00:22:21.754 "results": [ 00:22:21.754 { 00:22:21.754 "job": "NVMe0n1", 00:22:21.754 "core_mask": "0x1", 00:22:21.754 "workload": "verify", 00:22:21.754 "status": "finished", 00:22:21.754 "verify_range": { 00:22:21.754 "start": 0, 00:22:21.754 "length": 16384 00:22:21.754 }, 00:22:21.754 "queue_depth": 128, 00:22:21.754 "io_size": 4096, 00:22:21.754 "runtime": 15.002339, 00:22:21.754 "iops": 8336.233436666109, 00:22:21.754 "mibps": 32.56341186197699, 00:22:21.754 "io_failed": 10133, 00:22:21.754 "io_timeout": 0, 00:22:21.754 "avg_latency_us": 14175.888038063806, 00:22:21.754 "min_latency_us": 570.4059259259259, 00:22:21.754 "max_latency_us": 23204.59851851852 00:22:21.754 } 00:22:21.754 ], 00:22:21.754 "core_count": 1 00:22:21.754 } 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1539583 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1539583 ']' 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1539583 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539583 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539583' 00:22:21.754 killing process with pid 1539583 00:22:21.754 18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1539583 00:22:21.754 
18:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1539583
00:22:21.754 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:21.754 [2024-12-09 18:11:27.489060] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:22:21.754 [2024-12-09 18:11:27.489141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539583 ]
00:22:21.754 [2024-12-09 18:11:27.562110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:21.754 [2024-12-09 18:11:27.622421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:21.754 Running I/O for 15 seconds...
00:22:21.754 8183.00 IOPS, 31.96 MiB/s [2024-12-09T17:11:44.795Z] [2024-12-09 18:11:29.875387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.754 [2024-12-09 18:11:29.875426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.754 [2024-12-09 18:11:29.875453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.754 [2024-12-09 18:11:29.875469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.754 [2024-12-09 18:11:29.875486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.754 [2024-12-09 18:11:29.875501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 
18:11:29.875701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.754 [2024-12-09 18:11:29.875813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.754 [2024-12-09 18:11:29.875846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.875862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.875875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.875890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:96 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.875919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.875933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.875947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.875960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.875974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.875989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.876002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.876029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.876057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 
[2024-12-09 18:11:29.876617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.876807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.755 [2024-12-09 18:11:29.876860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.876971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.876990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.877004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.877019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.877032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.877046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.877059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.877074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.877087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.755 [2024-12-09 18:11:29.877102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.755 [2024-12-09 18:11:29.877115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.756 
[2024-12-09 18:11:29.877129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-09 18:11:29.877143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE/ABORTED pairs repeated for lba:76016 through lba:76328, len:8 each, varying cid; timestamps 18:11:29.877157 through 18:11:29.878372 ...]
[2024-12-09 18:11:29.878401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 18:11:29.878418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76336 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 18:11:29.878432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 18:11:29.878460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... identical abort/manual-complete/WRITE/ABORTED groups repeated for lba:76344 through lba:76592; timestamps 18:11:29.878472 through 18:11:29.880065 ...]
[2024-12-09 18:11:29.880134] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-12-09 18:11:29.880188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 18:11:29.880212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST/ABORTED pairs repeated for cid:1 through cid:3 ...]
[2024-12-09 18:11:29.889498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-12-09 18:11:29.889609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7180 (9): Bad file descriptor
[2024-12-09 18:11:29.892922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-09 18:11:29.924417] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
8014.50 IOPS, 31.31 MiB/s [2024-12-09T17:11:44.799Z] 8143.67 IOPS, 31.81 MiB/s [2024-12-09T17:11:44.799Z] 8225.75 IOPS, 32.13 MiB/s [2024-12-09T17:11:44.799Z]
[2024-12-09 18:11:33.521878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-09 18:11:33.521941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED pairs repeated for lba:65808 through lba:65928, len:8 each, varying cid; timestamps 18:11:33.521969 through 18:11:33.522416 ...]
00:22:21.759 [2024-12-09 18:11:33.522430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.522946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 
[2024-12-09 18:11:33.522976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.522991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.759 [2024-12-09 18:11:33.523005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.759 [2024-12-09 18:11:33.523419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.759 [2024-12-09 18:11:33.523434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66256 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 
18:11:33.523831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.760 [2024-12-09 18:11:33.523896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.523983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.523997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524010] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 [2024-12-09 18:11:33.524704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.760 [2024-12-09 18:11:33.524719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.760 
[2024-12-09 18:11:33.524733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.761 [2024-12-09 18:11:33.524762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.761 [2024-12-09 18:11:33.524792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.761 [2024-12-09 18:11:33.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.761 [2024-12-09 18:11:33.524866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.761 [2024-12-09 18:11:33.524899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524932] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.524948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66568 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.524972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.524991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66576 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66584 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66592 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 
[2024-12-09 18:11:33.525131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66600 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66608 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66616 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66624 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66632 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66640 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66648 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66656 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66664 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66672 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66680 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66688 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66696 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66704 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.525956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.761 [2024-12-09 18:11:33.525967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66728 len:8 PRP1 0x0 PRP2 0x0 00:22:21.761 [2024-12-09 18:11:33.525979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.761 [2024-12-09 18:11:33.525992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.761 [2024-12-09 18:11:33.526002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66736 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66744 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66752 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 
[2024-12-09 18:11:33.526142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66760 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66768 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66776 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:66784 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66792 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66800 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66808 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526456] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.762 [2024-12-09 18:11:33.526466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.762 [2024-12-09 18:11:33.526477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66816 len:8 PRP1 0x0 PRP2 0x0 00:22:21.762 [2024-12-09 18:11:33.526489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526579] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:21.762 [2024-12-09 18:11:33.526622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.762 [2024-12-09 18:11:33.526640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.762 [2024-12-09 18:11:33.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.762 [2024-12-09 18:11:33.526706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.762 [2024-12-09 18:11:33.526733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:33.526746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:21.762 [2024-12-09 18:11:33.526788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7180 (9): Bad file descriptor 00:22:21.762 [2024-12-09 18:11:33.530092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:21.762 [2024-12-09 18:11:33.672905] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:21.762 8000.00 IOPS, 31.25 MiB/s [2024-12-09T17:11:44.803Z] 8102.50 IOPS, 31.65 MiB/s [2024-12-09T17:11:44.803Z] 8189.43 IOPS, 31.99 MiB/s [2024-12-09T17:11:44.803Z] 8242.12 IOPS, 32.20 MiB/s [2024-12-09T17:11:44.803Z] 8275.33 IOPS, 32.33 MiB/s [2024-12-09T17:11:44.803Z] [2024-12-09 18:11:38.155591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 
[2024-12-09 18:11:38.155940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.155983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.155997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.156011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.156025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.156040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.156056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.156071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.156085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.156099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.762 [2024-12-09 18:11:38.156112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.762 [2024-12-09 18:11:38.156126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 
[2024-12-09 18:11:38.156438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.763 [2024-12-09 18:11:38.156493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156640] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.763 [2024-12-09 18:11:38.156794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.763 [2024-12-09 18:11:38.156808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:22:21.763 [2024-12-09 18:11:38.156824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:21.763 [2024-12-09 18:11:38.156853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated print_command/print_completion pairs elided, 18:11:38.156869-18:11:38.159579: roughly 90 further queued commands on sqid:1 (WRITE lba 27448-27864, interleaved READ lba 27080-27360), each completed with ABORTED - SQ DELETION (00/08) during queue teardown ...]
00:22:21.765 [2024-12-09 18:11:38.159612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:21.765 [2024-12-09 18:11:38.159628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:21.765 [2024-12-09 18:11:38.159640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27872 len:8 PRP1 0x0 PRP2 0x0
00:22:21.765 [2024-12-09 18:11:38.159654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.765 [2024-12-09 18:11:38.159722] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:22:21.765 [2024-12-09 18:11:38.159760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.765 [2024-12-09 18:11:38.159778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.765 [2024-12-09 18:11:38.159798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.765 [2024-12-09 18:11:38.159812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.765 [2024-12-09 18:11:38.159826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.765 [2024-12-09 18:11:38.159840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.765 [2024-12-09 18:11:38.159854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.765 [2024-12-09 18:11:38.159867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.766 [2024-12-09 18:11:38.159880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:21.766 [2024-12-09 18:11:38.159922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7180 (9): Bad file descriptor
00:22:21.766 [2024-12-09 18:11:38.163282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:21.766 [2024-12-09 18:11:38.234645] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:22:21.766 8234.60 IOPS, 32.17 MiB/s [2024-12-09T17:11:44.807Z] 8266.64 IOPS, 32.29 MiB/s [2024-12-09T17:11:44.807Z] 8287.83 IOPS, 32.37 MiB/s [2024-12-09T17:11:44.807Z] 8304.77 IOPS, 32.44 MiB/s [2024-12-09T17:11:44.807Z] 8326.86 IOPS, 32.53 MiB/s
00:22:21.766 Latency(us)
00:22:21.766 [2024-12-09T17:11:44.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:21.766 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:21.766 Verification LBA range: start 0x0 length 0x4000
00:22:21.766 NVMe0n1 : 15.00 8336.23 32.56 675.43 0.00 14175.89 570.41 23204.60
00:22:21.766 [2024-12-09T17:11:44.807Z] ===================================================================================================================
00:22:21.766 [2024-12-09T17:11:44.807Z] Total : 8336.23 32.56 675.43 0.00 14175.89 570.41 23204.60
00:22:21.766 Received shutdown signal, test time was about 15.000000 seconds
00:22:21.766
00:22:21.766 Latency(us)
00:22:21.766 [2024-12-09T17:11:44.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:21.766 [2024-12-09T17:11:44.807Z] ===================================================================================================================
00:22:21.766 [2024-12-09T17:11:44.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1541557
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1541557 /var/tmp/bdevperf.sock
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1541557 ']'
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:21.766 [2024-12-09 18:11:44.558223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:21.766 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:22.023 [2024-12-09 18:11:44.822958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:22:22.023 18:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:22.587 NVMe0n1
00:22:22.587 18:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:22.845
00:22:22.845 18:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:23.409 00:22:23.409 18:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.409 18:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:23.667 18:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.924 18:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:27.201 18:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.201 18:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:27.201 18:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1542229 00:22:27.201 18:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.201 18:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1542229 00:22:28.200 { 00:22:28.200 "results": [ 00:22:28.200 { 00:22:28.200 "job": "NVMe0n1", 00:22:28.200 "core_mask": "0x1", 00:22:28.200 "workload": "verify", 00:22:28.200 "status": "finished", 00:22:28.200 "verify_range": { 00:22:28.200 "start": 0, 00:22:28.200 "length": 16384 00:22:28.200 }, 00:22:28.200 "queue_depth": 128, 00:22:28.200 "io_size": 4096, 00:22:28.200 "runtime": 1.011814, 00:22:28.200 "iops": 8415.578357287011, 00:22:28.200 "mibps": 32.87335295815239, 00:22:28.200 "io_failed": 0, 00:22:28.200 "io_timeout": 0, 00:22:28.200 "avg_latency_us": 
15144.030402818556, 00:22:28.200 "min_latency_us": 1523.1051851851853, 00:22:28.200 "max_latency_us": 15243.188148148149 00:22:28.200 } 00:22:28.200 ], 00:22:28.200 "core_count": 1 00:22:28.200 } 00:22:28.200 18:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:28.200 [2024-12-09 18:11:44.061945] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:22:28.200 [2024-12-09 18:11:44.062034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541557 ] 00:22:28.200 [2024-12-09 18:11:44.130398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.200 [2024-12-09 18:11:44.186242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.200 [2024-12-09 18:11:46.733447] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:28.200 [2024-12-09 18:11:46.733553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.200 [2024-12-09 18:11:46.733579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.200 [2024-12-09 18:11:46.733597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.200 [2024-12-09 18:11:46.733611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.200 [2024-12-09 18:11:46.733624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.200 [2024-12-09 18:11:46.733638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.200 [2024-12-09 18:11:46.733652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.200 [2024-12-09 18:11:46.733665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.200 [2024-12-09 18:11:46.733679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:28.200 [2024-12-09 18:11:46.733729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:28.200 [2024-12-09 18:11:46.733760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1180 (9): Bad file descriptor 00:22:28.200 [2024-12-09 18:11:46.739938] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:28.200 Running I/O for 1 seconds... 
00:22:28.200 8384.00 IOPS, 32.75 MiB/s 00:22:28.200 Latency(us) 00:22:28.200 [2024-12-09T17:11:51.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.200 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:28.200 Verification LBA range: start 0x0 length 0x4000 00:22:28.200 NVMe0n1 : 1.01 8415.58 32.87 0.00 0.00 15144.03 1523.11 15243.19 00:22:28.200 [2024-12-09T17:11:51.241Z] =================================================================================================================== 00:22:28.200 [2024-12-09T17:11:51.241Z] Total : 8415.58 32.87 0.00 0.00 15144.03 1523.11 15243.19 00:22:28.200 18:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.200 18:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:28.765 18:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.765 18:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.765 18:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:29.038 18:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:29.603 18:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1541557 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1541557 ']' 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1541557 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541557 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541557' 00:22:32.892 killing process with pid 1541557 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1541557 00:22:32.892 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1541557 00:22:33.149 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:33.149 18:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.407 rmmod nvme_tcp 00:22:33.407 rmmod nvme_fabrics 00:22:33.407 rmmod nvme_keyring 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1539286 ']' 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1539286 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1539286 ']' 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1539286 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539286 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539286' 00:22:33.407 killing process with pid 1539286 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1539286 00:22:33.407 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1539286 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.666 18:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.571 18:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.571 00:22:35.571 real 0m35.975s 00:22:35.571 user 2m7.226s 00:22:35.571 sys 
0m5.959s 00:22:35.571 18:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.571 18:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.571 ************************************ 00:22:35.571 END TEST nvmf_failover 00:22:35.571 ************************************ 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.830 ************************************ 00:22:35.830 START TEST nvmf_host_discovery 00:22:35.830 ************************************ 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:35.830 * Looking for test storage... 
00:22:35.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:35.830 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.831 --rc genhtml_branch_coverage=1 00:22:35.831 --rc genhtml_function_coverage=1 00:22:35.831 --rc 
genhtml_legend=1 00:22:35.831 --rc geninfo_all_blocks=1 00:22:35.831 --rc geninfo_unexecuted_blocks=1 00:22:35.831 00:22:35.831 ' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.831 --rc genhtml_branch_coverage=1 00:22:35.831 --rc genhtml_function_coverage=1 00:22:35.831 --rc genhtml_legend=1 00:22:35.831 --rc geninfo_all_blocks=1 00:22:35.831 --rc geninfo_unexecuted_blocks=1 00:22:35.831 00:22:35.831 ' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.831 --rc genhtml_branch_coverage=1 00:22:35.831 --rc genhtml_function_coverage=1 00:22:35.831 --rc genhtml_legend=1 00:22:35.831 --rc geninfo_all_blocks=1 00:22:35.831 --rc geninfo_unexecuted_blocks=1 00:22:35.831 00:22:35.831 ' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:35.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.831 --rc genhtml_branch_coverage=1 00:22:35.831 --rc genhtml_function_coverage=1 00:22:35.831 --rc genhtml_legend=1 00:22:35.831 --rc geninfo_all_blocks=1 00:22:35.831 --rc geninfo_unexecuted_blocks=1 00:22:35.831 00:22:35.831 ' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.831 18:11:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.831 18:11:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.831 18:11:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.831 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.832 18:11:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.363 
18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.363 18:12:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:38.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.363 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:38.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:38.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:38.364 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:22:38.364 00:22:38.364 --- 10.0.0.2 ping statistics --- 00:22:38.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.364 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:22:38.364 00:22:38.364 --- 10.0.0.1 ping statistics --- 00:22:38.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.364 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.364 
18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1545039 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1545039 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1545039 ']' 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.364 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.364 [2024-12-09 18:12:01.247159] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:22:38.364 [2024-12-09 18:12:01.247233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.364 [2024-12-09 18:12:01.320455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.364 [2024-12-09 18:12:01.375912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.364 [2024-12-09 18:12:01.375972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.364 [2024-12-09 18:12:01.375995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.364 [2024-12-09 18:12:01.376007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.364 [2024-12-09 18:12:01.376016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.364 [2024-12-09 18:12:01.376654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.622 [2024-12-09 18:12:01.521369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.622 [2024-12-09 18:12:01.529593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:38.622 18:12:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.622 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.623 null0 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.623 null1 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1545168 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1545168 /tmp/host.sock 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1545168 ']' 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:38.623 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.623 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.623 [2024-12-09 18:12:01.606805] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:22:38.623 [2024-12-09 18:12:01.606915] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545168 ] 00:22:38.881 [2024-12-09 18:12:01.676945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.881 [2024-12-09 18:12:01.736663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:38.881 
18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:38.881 18:12:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.881 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.139 18:12:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:39.139 18:12:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.139 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 [2024-12-09 18:12:02.107128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.140 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:39.400 18:12:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:39.968 [2024-12-09 18:12:02.909186] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:39.968 [2024-12-09 18:12:02.909211] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:39.968 [2024-12-09 18:12:02.909235] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:39.968 [2024-12-09 18:12:02.997496] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:40.228 [2024-12-09 18:12:03.179575] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:40.228 [2024-12-09 18:12:03.180622] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xb08aa0:1 started. 00:22:40.228 [2024-12-09 18:12:03.182406] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:40.228 [2024-12-09 18:12:03.182427] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:40.228 [2024-12-09 18:12:03.187355] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb08aa0 was disconnected and freed. delete nvme_qpair. 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.487 18:12:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.487 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:40.488 
18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.488 [2024-12-09 18:12:03.473406] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xad7230:1 started. 00:22:40.488 [2024-12-09 18:12:03.478017] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xad7230 was disconnected and freed. delete nvme_qpair. 
00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.488 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 [2024-12-09 18:12:03.559622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.747 [2024-12-09 18:12:03.560208] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:40.747 [2024-12-09 18:12:03.560256] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.747 18:12:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.747 [2024-12-09 18:12:03.646911] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:40.747 18:12:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:41.007 [2024-12-09 18:12:03.915503] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:41.007 [2024-12-09 18:12:03.915597] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:41.007 [2024-12-09 18:12:03.915616] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:22:41.007 [2024-12-09 18:12:03.915633] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:41.948 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.949 [2024-12-09 18:12:04.791812] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:41.949 [2024-12-09 18:12:04.791872] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:41.949 [2024-12-09 18:12:04.793755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.949 [2024-12-09 18:12:04.793787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.949 [2024-12-09 18:12:04.793826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:41.949 [2024-12-09 18:12:04.793841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.949 [2024-12-09 18:12:04.793855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.949 [2024-12-09 18:12:04.793869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.949 [2024-12-09 18:12:04.793883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.949 [2024-12-09 18:12:04.793896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.949 [2024-12-09 18:12:04.793910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:41.949 18:12:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:41.949 [2024-12-09 18:12:04.803745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor 00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.949 [2024-12-09 18:12:04.813786] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:41.949 [2024-12-09 18:12:04.813810] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:41.949 [2024-12-09 18:12:04.813841] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:41.949 [2024-12-09 18:12:04.813851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:41.949 [2024-12-09 18:12:04.813883] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:41.949 [2024-12-09 18:12:04.814056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.949 [2024-12-09 18:12:04.814085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.949 [2024-12-09 18:12:04.814107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.949 [2024-12-09 18:12:04.814131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.949 [2024-12-09 18:12:04.814179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.949 [2024-12-09 18:12:04.814198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.949 [2024-12-09 18:12:04.814213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.949 [2024-12-09 18:12:04.814226] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.949 [2024-12-09 18:12:04.814236] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.949 [2024-12-09 18:12:04.814245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.949 [2024-12-09 18:12:04.823915] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:41.949 [2024-12-09 18:12:04.823934] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:41.949 [2024-12-09 18:12:04.823943] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:41.949 [2024-12-09 18:12:04.823949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:41.949 [2024-12-09 18:12:04.823987] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:41.949 [2024-12-09 18:12:04.824214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.949 [2024-12-09 18:12:04.824241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.949 [2024-12-09 18:12:04.824257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.949 [2024-12-09 18:12:04.824279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.949 [2024-12-09 18:12:04.824311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.949 [2024-12-09 18:12:04.824328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.949 [2024-12-09 18:12:04.824341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.949 [2024-12-09 18:12:04.824353] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.949 [2024-12-09 18:12:04.824361] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.949 [2024-12-09 18:12:04.824369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.949 [2024-12-09 18:12:04.834020] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:41.949 [2024-12-09 18:12:04.834040] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:41.949 [2024-12-09 18:12:04.834048] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:41.949 [2024-12-09 18:12:04.834055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:41.949 [2024-12-09 18:12:04.834093] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:41.949 [2024-12-09 18:12:04.834323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.949 [2024-12-09 18:12:04.834356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.949 [2024-12-09 18:12:04.834372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.949 [2024-12-09 18:12:04.834394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.949 [2024-12-09 18:12:04.834428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.949 [2024-12-09 18:12:04.834445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.949 [2024-12-09 18:12:04.834458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.949 [2024-12-09 18:12:04.834470] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.949 [2024-12-09 18:12:04.834478] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.949 [2024-12-09 18:12:04.834486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:41.949 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:41.949 [2024-12-09 18:12:04.844128] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:41.949 [2024-12-09 18:12:04.844151] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:41.949 [2024-12-09 18:12:04.844159] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.844166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:41.950 [2024-12-09 18:12:04.844205] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:41.950 [2024-12-09 18:12:04.844401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:41.950 [2024-12-09 18:12:04.844430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.950 [2024-12-09 18:12:04.844446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.950 [2024-12-09 18:12:04.844468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.950 [2024-12-09 18:12:04.844499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.950 [2024-12-09 18:12:04.844516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.950 [2024-12-09 18:12:04.844535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.950 [2024-12-09 18:12:04.844557] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:41.950 [2024-12-09 18:12:04.844569] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.950 [2024-12-09 18:12:04.844579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:41.950 [2024-12-09 18:12:04.854238] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:41.950 [2024-12-09 18:12:04.854261] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:41.950 [2024-12-09 18:12:04.854270] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.854277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:41.950 [2024-12-09 18:12:04.854315] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.854445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.950 [2024-12-09 18:12:04.854473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.950 [2024-12-09 18:12:04.854489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.950 [2024-12-09 18:12:04.854510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.950 [2024-12-09 18:12:04.854530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.950 [2024-12-09 18:12:04.854543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.950 [2024-12-09 18:12:04.854566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.950 [2024-12-09 18:12:04.854578] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.950 [2024-12-09 18:12:04.854587] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.950 [2024-12-09 18:12:04.854594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.950 [2024-12-09 18:12:04.864356] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:41.950 [2024-12-09 18:12:04.864377] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:41.950 [2024-12-09 18:12:04.864386] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.864393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:41.950 [2024-12-09 18:12:04.864431] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.864534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.950 [2024-12-09 18:12:04.864583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.950 [2024-12-09 18:12:04.864600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.950 [2024-12-09 18:12:04.864622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.950 [2024-12-09 18:12:04.864648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.950 [2024-12-09 18:12:04.864662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.950 [2024-12-09 18:12:04.864675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.950 [2024-12-09 18:12:04.864687] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.950 [2024-12-09 18:12:04.864696] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.950 [2024-12-09 18:12:04.864703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:41.950 [2024-12-09 18:12:04.874464] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:41.950 [2024-12-09 18:12:04.874483] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:41.950 [2024-12-09 18:12:04.874491] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.874498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:41.950 [2024-12-09 18:12:04.874534] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:41.950 [2024-12-09 18:12:04.874683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:41.950 [2024-12-09 18:12:04.874711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad9050 with addr=10.0.0.2, port=4420
00:22:41.950 [2024-12-09 18:12:04.874726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9050 is same with the state(6) to be set
00:22:41.950 [2024-12-09 18:12:04.874747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad9050 (9): Bad file descriptor
00:22:41.950 [2024-12-09 18:12:04.874767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:22:41.950 [2024-12-09 18:12:04.874779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:22:41.950 [2024-12-09 18:12:04.874792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:22:41.950 [2024-12-09 18:12:04.874804] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:22:41.950 [2024-12-09 18:12:04.874812] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:22:41.950 [2024-12-09 18:12:04.874819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:22:41.950 [2024-12-09 18:12:04.880980] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:22:41.950 [2024-12-09 18:12:04.881024] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:41.950 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:22:41.951 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:22:41.951 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:41.951 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:41.951 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:22:41.951 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:41.951 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:42.209 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.209 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:42.210 18:12:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.210 18:12:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:43.587 [2024-12-09 18:12:06.198702] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:43.587 [2024-12-09 18:12:06.198729] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:43.587 [2024-12-09 18:12:06.198753] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:43.587 [2024-12-09 18:12:06.287013] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:22:43.587 [2024-12-09 18:12:06.594462] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:22:43.587 [2024-12-09 18:12:06.595306] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xc3a550:1 started.
00:22:43.587 [2024-12-09 18:12:06.597438] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:43.587 [2024-12-09 18:12:06.597480] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:43.587 [2024-12-09 18:12:06.606688] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xc3a550 was disconnected and freed. delete nvme_qpair.
00:22:43.587 request:
00:22:43.587 {
00:22:43.587 "name": "nvme",
00:22:43.587 "trtype": "tcp",
00:22:43.587 "traddr": "10.0.0.2",
00:22:43.587 "adrfam": "ipv4",
00:22:43.587 "trsvcid": "8009",
00:22:43.587 "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:43.587 "wait_for_attach": true,
00:22:43.587 "method": "bdev_nvme_start_discovery",
00:22:43.587 "req_id": 1
00:22:43.587 }
00:22:43.587 Got JSON-RPC error response
00:22:43.587 response:
00:22:43.587 {
00:22:43.587 "code": -17,
00:22:43.587 "message": "File exists"
00:22:43.587 }
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:22:43.587 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.846 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.846 request: 00:22:43.846 { 00:22:43.846 "name": "nvme_second", 00:22:43.846 "trtype": "tcp", 00:22:43.846 "traddr": "10.0.0.2", 00:22:43.846 "adrfam": "ipv4", 00:22:43.846 "trsvcid": "8009", 00:22:43.847 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.847 "wait_for_attach": true, 00:22:43.847 "method": "bdev_nvme_start_discovery", 00:22:43.847 "req_id": 1 00:22:43.847 } 00:22:43.847 Got JSON-RPC error response 00:22:43.847 response: 00:22:43.847 { 00:22:43.847 
"code": -17, 00:22:43.847 "message": "File exists" 00:22:43.847 } 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.847 18:12:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.847 18:12:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.847 18:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.782 [2024-12-09 18:12:07.792828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.782 [2024-12-09 18:12:07.792892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaed050 with addr=10.0.0.2, port=8010 00:22:44.782 [2024-12-09 18:12:07.792927] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:44.782 [2024-12-09 18:12:07.792951] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:44.782 [2024-12-09 18:12:07.792975] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:46.157 [2024-12-09 18:12:08.795320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.157 [2024-12-09 18:12:08.795369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaed050 with addr=10.0.0.2, port=8010 00:22:46.157 [2024-12-09 18:12:08.795398] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:46.157 [2024-12-09 18:12:08.795412] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:46.157 [2024-12-09 18:12:08.795424] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:47.093 [2024-12-09 18:12:09.797468] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:47.093 request: 00:22:47.093 { 00:22:47.093 "name": "nvme_second", 00:22:47.093 "trtype": "tcp", 00:22:47.093 "traddr": "10.0.0.2", 00:22:47.093 "adrfam": "ipv4", 00:22:47.093 "trsvcid": "8010", 00:22:47.093 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:47.093 "wait_for_attach": false, 00:22:47.093 "attach_timeout_ms": 3000, 
00:22:47.093 "method": "bdev_nvme_start_discovery", 00:22:47.093 "req_id": 1 00:22:47.093 } 00:22:47.093 Got JSON-RPC error response 00:22:47.093 response: 00:22:47.093 { 00:22:47.093 "code": -110, 00:22:47.093 "message": "Connection timed out" 00:22:47.093 } 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 1545168 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.093 rmmod nvme_tcp 00:22:47.093 rmmod nvme_fabrics 00:22:47.093 rmmod nvme_keyring 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1545039 ']' 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1545039 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1545039 ']' 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1545039 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1545039 00:22:47.093 18:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1545039' 00:22:47.093 killing process with pid 1545039 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1545039 00:22:47.093 18:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1545039 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.352 18:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.267 
18:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.267 00:22:49.267 real 0m13.572s 00:22:49.267 user 0m19.424s 00:22:49.267 sys 0m2.942s 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.267 ************************************ 00:22:49.267 END TEST nvmf_host_discovery 00:22:49.267 ************************************ 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.267 ************************************ 00:22:49.267 START TEST nvmf_host_multipath_status 00:22:49.267 ************************************ 00:22:49.267 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:49.526 * Looking for test storage... 
00:22:49.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:49.526 18:12:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.526 18:12:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:49.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.526 --rc genhtml_branch_coverage=1 00:22:49.526 --rc genhtml_function_coverage=1 00:22:49.526 --rc genhtml_legend=1 00:22:49.526 --rc geninfo_all_blocks=1 00:22:49.526 --rc geninfo_unexecuted_blocks=1 00:22:49.526 00:22:49.526 ' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:49.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.526 --rc genhtml_branch_coverage=1 00:22:49.526 --rc genhtml_function_coverage=1 00:22:49.526 --rc genhtml_legend=1 00:22:49.526 --rc geninfo_all_blocks=1 00:22:49.526 --rc geninfo_unexecuted_blocks=1 00:22:49.526 00:22:49.526 ' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:49.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.526 --rc genhtml_branch_coverage=1 00:22:49.526 --rc genhtml_function_coverage=1 00:22:49.526 --rc genhtml_legend=1 00:22:49.526 --rc geninfo_all_blocks=1 00:22:49.526 --rc geninfo_unexecuted_blocks=1 00:22:49.526 00:22:49.526 ' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:49.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.526 --rc genhtml_branch_coverage=1 00:22:49.526 --rc genhtml_function_coverage=1 00:22:49.526 --rc genhtml_legend=1 00:22:49.526 --rc geninfo_all_blocks=1 00:22:49.526 --rc geninfo_unexecuted_blocks=1 00:22:49.526 00:22:49.526 ' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:49.526 
18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.526 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:22:49.527 18:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:52.060 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:22:52.061 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:22:52.061 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:22:52.061 Found net devices under 0000:0a:00.0: cvl_0_0
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:22:52.061 Found net devices under 0000:0a:00.1: cvl_0_1
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:52.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:52.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms
00:22:52.061
00:22:52.061 --- 10.0.0.2 ping statistics ---
00:22:52.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.061 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:52.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:52.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:22:52.061
00:22:52.061 --- 10.0.0.1 ping statistics ---
00:22:52.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.061 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:52.061 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1548771
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1548771
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1548771 ']'
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:52.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:52.062 [2024-12-09 18:12:14.717497] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:22:52.062 [2024-12-09 18:12:14.717598] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:52.062 [2024-12-09 18:12:14.789652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:52.062 [2024-12-09 18:12:14.849269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:52.062 [2024-12-09 18:12:14.849340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:52.062 [2024-12-09 18:12:14.849353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:52.062 [2024-12-09 18:12:14.849364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:52.062 [2024-12-09 18:12:14.849374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:52.062 [2024-12-09 18:12:14.853567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:52.062 [2024-12-09 18:12:14.853578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:52.062 18:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:52.062 18:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:52.062 18:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1548771
00:22:52.062 18:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:22:52.319 [2024-12-09 18:12:15.306706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:52.319 18:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:22:52.884 Malloc0
00:22:52.884 18:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:22:53.142 18:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:53.400 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:53.658 [2024-12-09 18:12:16.479754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:53.658 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:53.916 [2024-12-09 18:12:16.752373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1549055
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1549055 /var/tmp/bdevperf.sock
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1549055 ']'
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:53.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:53.916 18:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:54.174 18:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:54.174 18:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:22:54.174 18:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:22:54.432 18:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:22:55.001 Nvme0n1
00:22:55.001 18:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:22:55.261 Nvme0n1
00:22:55.261 18:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:22:55.261 18:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:22:57.168 18:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:22:57.168 18:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:22:57.426 18:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:57.684 18:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:59.061 18:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:59.319 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:59.319 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:59.319 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:59.319 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:59.577 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:59.577 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:59.577 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:59.577 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:59.835 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:59.835 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:59.835 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:59.835 18:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:00.093 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:00.093 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:00.093 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:00.093 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:00.351 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:00.351 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:23:00.351 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:00.917 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:00.917 18:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:23:02.309 18:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:23:02.309 18:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:02.309 18:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:02.309 18:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:02.309 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:02.309 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:02.309 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:02.309 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:02.617 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:02.617 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:02.617 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:02.617 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:02.893 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:02.893 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:02.893 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:02.893 18:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:03.151 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:03.151 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:03.151 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:03.151 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:03.409 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:03.409 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:03.409 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:03.409 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:03.666 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:03.666 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:23:03.666 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:03.924 18:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:04.182 18:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:23:05.116 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:23:05.116 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:05.116 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:05.116 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.682 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:05.940 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.940 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:05.940 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.940 18:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.506 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:06.764 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.764 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:06.764 18:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:07.330 18:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:07.330 18:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.703 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:08.961 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.961 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:08.961 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.961 18:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:09.220 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.220 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:09.220 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.220 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:09.478 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.478 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:09.478 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.478 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:09.736 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.736 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:09.737 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.737 18:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:09.994 18:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:09.994 18:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:09.995 18:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:10.560 18:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:10.560 18:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:11.930 18:12:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.930 18:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:12.187 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.187 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:12.187 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.187 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:12.444 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.444 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:12.444 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.444 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:12.701 
18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.701 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:12.701 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.701 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:12.959 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.959 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:12.959 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.959 18:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:13.216 18:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.216 18:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:13.216 18:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:13.474 18:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.732 18:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:15.103 18:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:15.103 18:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:15.103 18:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.103 18:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:15.103 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.103 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:15.103 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.103 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:15.361 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.361 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:15.361 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.361 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:15.619 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.619 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:15.619 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.619 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:15.877 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.877 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:15.877 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.877 18:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:16.135 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.135 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:16.135 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.135 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:16.393 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.393 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:16.651 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:16.651 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:16.909 18:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:17.475 18:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:18.409 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:18.409 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:18.409 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:18.409 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:18.666 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.666 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:18.666 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.666 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:18.924 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.924 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:18.924 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.924 18:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.182 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.182 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.182 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:19.182 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.440 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.440 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:19.440 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.440 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:19.698 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.698 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:19.698 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.698 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:19.955 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.955 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:19.955 18:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:20.213 18:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:20.470 18:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.844 18:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.102 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.102 18:12:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.102 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.102 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.359 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.359 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.359 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.359 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.616 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.616 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:22.616 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.616 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:22.874 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.874 
18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:22.874 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.874 18:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.132 18:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.132 18:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:23.132 18:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:23.390 18:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:23.648 18:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.022 18:12:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.022 18:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:25.280 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.280 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:25.280 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.280 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.538 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.538 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.538 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.538 18:12:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.796 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.796 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.796 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.796 18:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:26.054 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.054 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:26.054 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.054 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:26.312 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.312 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:26.312 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.880 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:26.880 18:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:28.253 18:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:28.253 18:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:28.253 18:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.253 18:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:28.253 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.253 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:28.253 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.253 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:28.511 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.511 18:12:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.511 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.511 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:28.769 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.769 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:28.769 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.769 18:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:29.027 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.027 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:29.027 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.027 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:29.285 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.285 
18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:29.285 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.285 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:29.543 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.543 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1549055 00:23:29.544 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1549055 ']' 00:23:29.544 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1549055 00:23:29.544 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:29.544 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.544 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549055 00:23:29.802 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:29.802 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:29.802 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549055' 00:23:29.802 killing process with pid 1549055 00:23:29.802 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1549055 00:23:29.802 
18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1549055
00:23:29.802 {
00:23:29.802 "results": [
00:23:29.802 {
00:23:29.802 "job": "Nvme0n1",
00:23:29.802 "core_mask": "0x4",
00:23:29.802 "workload": "verify",
00:23:29.802 "status": "terminated",
00:23:29.802 "verify_range": {
00:23:29.802 "start": 0,
00:23:29.802 "length": 16384
00:23:29.802 },
00:23:29.802 "queue_depth": 128,
00:23:29.802 "io_size": 4096,
00:23:29.802 "runtime": 34.255055,
00:23:29.802 "iops": 7994.528106873569,
00:23:29.802 "mibps": 31.22862541747488,
00:23:29.802 "io_failed": 0,
00:23:29.802 "io_timeout": 0,
00:23:29.802 "avg_latency_us": 15983.050378923215,
00:23:29.802 "min_latency_us": 637.1555555555556,
00:23:29.802 "max_latency_us": 4026531.84
00:23:29.802 }
00:23:29.802 ],
00:23:29.802 "core_count": 1
00:23:29.802 }
00:23:30.072 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1549055
00:23:30.072 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:30.072 [2024-12-09 18:12:16.818642] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
[2024-12-09 18:12:16.818726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549055 ]
[2024-12-09 18:12:16.885685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 18:12:16.943538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
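Every `check_status`/`port_status` step traced above follows one pattern: fetch the io_paths over the bdevperf RPC socket and filter a single path's flag by listener port (`trsvcid`) with jq, then bash-compare the result against the expected value. A minimal self-contained sketch of that helper, mirroring `multipath_status.sh@64` — the JSON here is an illustrative stand-in for real `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` output, not captured from this run:

```shell
# Stand-in for bdev_nvme_get_io_paths output (illustrative values only).
sample_io_paths='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": false }
    ] }
  ]
}'

# port_status <port> <field> <expected> — same shape as the script's helper;
# in the real test the JSON comes from rpc.py over the bdevperf socket.
port_status() {
    local port=$1 field=$2 expected=$3 status
    status=$(printf '%s' "$sample_io_paths" |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$status" == "$expected" ]]
}

port_status 4420 current true     && echo "4420 current ok"
port_status 4421 accessible false && echo "4421 inaccessible ok"
```

After each `set_ANA_state` transition the test sleeps one second, then runs this check once per port/field pair, which is why the log repeats the same rpc.py + jq pair six times per `check_status` call.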
00:23:30.072 8592.00 IOPS, 33.56 MiB/s [2024-12-09T17:12:53.113Z] 8564.00 IOPS, 33.45 MiB/s [2024-12-09T17:12:53.113Z] 8533.33 IOPS, 33.33 MiB/s [2024-12-09T17:12:53.113Z] 8527.50 IOPS, 33.31 MiB/s [2024-12-09T17:12:53.113Z] 8478.00 IOPS, 33.12 MiB/s [2024-12-09T17:12:53.113Z] 8495.00 IOPS, 33.18 MiB/s [2024-12-09T17:12:53.113Z] 8505.86 IOPS, 33.23 MiB/s [2024-12-09T17:12:53.113Z] 8520.25 IOPS, 33.28 MiB/s [2024-12-09T17:12:53.113Z] 8523.56 IOPS, 33.30 MiB/s [2024-12-09T17:12:53.113Z] 8525.70 IOPS, 33.30 MiB/s [2024-12-09T17:12:53.113Z] 8511.00 IOPS, 33.25 MiB/s [2024-12-09T17:12:53.113Z] 8502.58 IOPS, 33.21 MiB/s [2024-12-09T17:12:53.113Z] 8493.69 IOPS, 33.18 MiB/s [2024-12-09T17:12:53.113Z] 8492.00 IOPS, 33.17 MiB/s [2024-12-09T17:12:53.113Z] [2024-12-09 18:12:33.278840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.278915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.278983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:30.072 [2024-12-09 18:12:33.279354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:23:30.072 [2024-12-09 18:12:33.279404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.072 [2024-12-09 18:12:33.279434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.279460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.279477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.279499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.279516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.279539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.279563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.279588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.279605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.279629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 
[2024-12-09 18:12:33.279645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.279669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.279685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.280636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.280682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.280723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.280762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 
18:12:33.280785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.280802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.280846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.280887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.280927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.280949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.280966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 
18:12:33.281020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 
18:12:33.281233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 
18:12:33.281525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.073 [2024-12-09 18:12:33.281578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 
18:12:33.281763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.281960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.281983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 
18:12:33.281999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.073 [2024-12-09 18:12:33.282023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.073 [2024-12-09 18:12:33.282039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 
18:12:33.282219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282705] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.282964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.282984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.074 [2024-12-09 18:12:33.283883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.074 [2024-12-09 18:12:33.283899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.283925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.283942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.283968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.283984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.075 [2024-12-09 18:12:33.284889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.075 [2024-12-09 18:12:33.284933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.284960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.284976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:30.075 [2024-12-09 18:12:33.285634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.075 [2024-12-09 18:12:33.285652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:30.075 8469.60 IOPS, 33.08 MiB/s [2024-12-09T17:12:53.116Z] 7940.25 IOPS, 31.02 MiB/s [2024-12-09T17:12:53.116Z] 7473.18 IOPS, 29.19 MiB/s [2024-12-09T17:12:53.116Z] 7058.00 IOPS, 27.57 MiB/s [2024-12-09T17:12:53.116Z] 6701.84 IOPS, 26.18 MiB/s [2024-12-09T17:12:53.117Z] 6793.15 IOPS, 26.54 MiB/s [2024-12-09T17:12:53.117Z] 6866.62 IOPS, 26.82 MiB/s [2024-12-09T17:12:53.117Z] 6999.27 IOPS, 27.34 MiB/s [2024-12-09T17:12:53.117Z] 7171.87 IOPS, 28.02 MiB/s [2024-12-09T17:12:53.117Z] 7340.00 IOPS, 28.67 MiB/s [2024-12-09T17:12:53.117Z] 7495.80 IOPS, 29.28 MiB/s [2024-12-09T17:12:53.117Z] 7534.42 IOPS, 29.43 MiB/s [2024-12-09T17:12:53.117Z] 7567.19 IOPS, 29.56 MiB/s [2024-12-09T17:12:53.117Z] 7595.61 IOPS, 29.67 MiB/s [2024-12-09T17:12:53.117Z] 7690.62 IOPS, 30.04 MiB/s 
[2024-12-09T17:12:53.117Z] 7799.80 IOPS, 30.47 MiB/s [2024-12-09T17:12:53.117Z] 7907.58 IOPS, 30.89 MiB/s [2024-12-09T17:12:53.117Z] [2024-12-09 18:12:49.880225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.076 [2024-12-09 18:12:49.880427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.880977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.880999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.076 [2024-12-09 18:12:49.881289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.076 [2024-12-09 18:12:49.881746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.076 [2024-12-09 18:12:49.881763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.881785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.881802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.881824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.881840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.881862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.881879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.883746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.883773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.883802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.883819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.883842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.883864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.883888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.883905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.883927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.883943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.883965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.883982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.884021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.884059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.884098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.884969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.884991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.077 [2024-12-09 18:12:49.885007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:30.077 [2024-12-09 18:12:49.885299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.077 [2024-12-09 18:12:49.885316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.885751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.885827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.885865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.885903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.885941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.885963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.885979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.886855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.886894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.886932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.886991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.887007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.887029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.078 [2024-12-09 18:12:49.887046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.887067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.887083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.887104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.887121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.887142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.887158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.887180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.078 [2024-12-09 18:12:49.887196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:30.078 [2024-12-09 18:12:49.887218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.887310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.887357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.887535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.887561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.888786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.888885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.888901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.890311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.890361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.890402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.890678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.079 [2024-12-09 18:12:49.890716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.079 [2024-12-09 18:12:49.890813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.079 [2024-12-09 18:12:49.890833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.890856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.890873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.890894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.890910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.890931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.890947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.890969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.890985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.891023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.891060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.891098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.891135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.891174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.891211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.891249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.891271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.891288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.892970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.892995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.893041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.893080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.893118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.893590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.893628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.893779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.893838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.893855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.895353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.895378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.895406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.895429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.895453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.080 [2024-12-09 18:12:49.895469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.895491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.895507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:30.080 [2024-12-09 18:12:49.895530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.080 [2024-12-09 18:12:49.895555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.895597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.895749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.895965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.895981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.896676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.896775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.896792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.898210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.081 [2024-12-09 18:12:49.898235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.898263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.898281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.898310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.898327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:30.081 [2024-12-09 18:12:49.898350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.081 [2024-12-09 18:12:49.898366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.898405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.898442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.898480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.898518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.898568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.898614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.898653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.898674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.898691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.899265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.899340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.899416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.899453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.899491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.899551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.899602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.899619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.082 [2024-12-09 18:12:49.901786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.082 [2024-12-09 18:12:49.901877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.082 [2024-12-09 18:12:49.901898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.901913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.901935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.901950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.901971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.901986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.902502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.902539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.902564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.904294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.904340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.904384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.904425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.904692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.904708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.905987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.906012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.906039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.906057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.906079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.906095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.906117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.083 [2024-12-09 18:12:49.906141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.083 [2024-12-09 18:12:49.906164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.083 [2024-12-09 18:12:49.906180] nvme_qpair.c: 
00:23:30.083–00:23:30.086 [2024-12-09 18:12:49.906202–49.916439] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: repeated READ and WRITE I/O commands on sqid:1 nsid:1 (len:8, LBAs in the range ~42352–44336; READs with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, WRITEs with SGL DATA BLOCK OFFSET 0x0 len:0x1000) all completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0054 through 0047 (wrapping). Several hundred essentially identical command/completion NOTICE pairs elided.
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.086 [2024-12-09 18:12:49.916455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.086 [2024-12-09 18:12:49.916477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.086 [2024-12-09 18:12:49.916493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.086 [2024-12-09 18:12:49.916515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.086 [2024-12-09 18:12:49.916531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.086 [2024-12-09 18:12:49.916561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.086 [2024-12-09 18:12:49.916579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.086 [2024-12-09 18:12:49.916601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.086 [2024-12-09 18:12:49.916617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.086 [2024-12-09 18:12:49.916639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.086 [2024-12-09 18:12:49.916656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.086 [2024-12-09 18:12:49.916682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.916924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.916961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.916983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.916999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.917305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.917419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.917472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.917510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.917574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.917598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.917615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.919120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.919383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.919399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.920729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.920775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.920814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.920851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.920894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.087 [2024-12-09 18:12:49.920934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.920972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.920994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.087 [2024-12-09 18:12:49.921025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.087 [2024-12-09 18:12:49.921047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.921911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.921963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.921987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.922004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.922255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.922272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.923156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.923216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.923261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.923302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.923341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.923379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.923418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.088 [2024-12-09 18:12:49.923457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.088 [2024-12-09 18:12:49.923855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:30.088 [2024-12-09 18:12:49.923883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.923901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.923924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.923941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.923962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.923979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.924355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.924393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.924431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.924468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.924636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.924675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.924697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.924713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.925153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.925198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.925366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.925519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.089 [2024-12-09 18:12:49.925848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.089 [2024-12-09 18:12:49.925885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.089 [2024-12-09 18:12:49.925907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.925923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.925945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.925960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.925983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.926002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.926026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.926042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.927889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.927914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.927956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.927975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.927997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.928764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.928884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.928899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.931164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.931209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.090 [2024-12-09 18:12:49.931248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.931285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.931323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.090 [2024-12-09 18:12:49.931361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.090 [2024-12-09 18:12:49.931382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.931748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.931785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.931823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.931876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.931914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.931969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.931991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.932126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.932164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.932203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.932451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.932467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.933861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.933887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.933914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.933937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.933961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.934168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.091 [2024-12-09 18:12:49.934207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.091 [2024-12-09 18:12:49.934388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:30.091 [2024-12-09 18:12:49.934410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.934453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.934491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.934530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.934581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.934620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.934658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.934674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.935640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.935679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.935918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.935955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.935977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.935993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.936030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.936068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.936121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.936174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.936212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.936253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.936276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.936293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.937611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.937657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.937696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.937733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.937771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.937809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-12-09 18:12:49.937861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.937900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.937937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.937973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.937995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.938018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.938041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.938072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.092 [2024-12-09 18:12:49.938096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.092 [2024-12-09 18:12:49.938112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.092 7966.47 IOPS, 31.12 MiB/s [2024-12-09T17:12:53.133Z] 7979.27 IOPS, 31.17 MiB/s [2024-12-09T17:12:53.133Z] 7991.79 IOPS, 31.22 MiB/s [2024-12-09T17:12:53.134Z] Received shutdown signal, test time was about 34.255825 seconds 00:23:30.093 00:23:30.093 Latency(us) 00:23:30.093 [2024-12-09T17:12:53.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.093 Job: Nvme0n1 (Core Mask 0x4, 
workload: verify, depth: 128, IO size: 4096) 00:23:30.093 Verification LBA range: start 0x0 length 0x4000 00:23:30.093 Nvme0n1 : 34.26 7994.53 31.23 0.00 0.00 15983.05 637.16 4026531.84 00:23:30.093 [2024-12-09T17:12:53.134Z] =================================================================================================================== 00:23:30.093 [2024-12-09T17:12:53.134Z] Total : 7994.53 31.23 0.00 0.00 15983.05 637.16 4026531.84 00:23:30.093 18:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.351 rmmod nvme_tcp 00:23:30.351 rmmod nvme_fabrics 00:23:30.351 rmmod nvme_keyring 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.351 18:12:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1548771 ']' 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1548771 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1548771 ']' 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1548771 00:23:30.351 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1548771 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1548771' 00:23:30.352 killing process with pid 1548771 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1548771 00:23:30.352 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1548771 00:23:30.609 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.609 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.609 18:12:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.609 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:30.609 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.610 18:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.569 00:23:32.569 real 0m43.249s 00:23:32.569 user 2m10.389s 00:23:32.569 sys 0m11.335s 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 ************************************ 00:23:32.569 END TEST nvmf_host_multipath_status 00:23:32.569 ************************************ 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
00:23:32.569 18:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 ************************************ 00:23:32.569 START TEST nvmf_discovery_remove_ifc 00:23:32.569 ************************************ 00:23:32.569 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:32.829 * Looking for test storage... 00:23:32.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.829 18:12:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:32.829 18:12:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:32.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.829 --rc genhtml_branch_coverage=1 00:23:32.829 --rc genhtml_function_coverage=1 00:23:32.829 --rc genhtml_legend=1 00:23:32.829 --rc geninfo_all_blocks=1 00:23:32.829 --rc geninfo_unexecuted_blocks=1 00:23:32.829 00:23:32.829 ' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:32.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.829 --rc genhtml_branch_coverage=1 00:23:32.829 --rc genhtml_function_coverage=1 00:23:32.829 --rc genhtml_legend=1 00:23:32.829 --rc geninfo_all_blocks=1 00:23:32.829 --rc geninfo_unexecuted_blocks=1 00:23:32.829 00:23:32.829 ' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:32.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.829 --rc genhtml_branch_coverage=1 00:23:32.829 --rc genhtml_function_coverage=1 00:23:32.829 --rc genhtml_legend=1 00:23:32.829 --rc geninfo_all_blocks=1 00:23:32.829 --rc geninfo_unexecuted_blocks=1 00:23:32.829 00:23:32.829 ' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:32.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.829 --rc genhtml_branch_coverage=1 00:23:32.829 --rc genhtml_function_coverage=1 00:23:32.829 --rc genhtml_legend=1 00:23:32.829 --rc geninfo_all_blocks=1 00:23:32.829 --rc geninfo_unexecuted_blocks=1 00:23:32.829 00:23:32.829 ' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:32.829 18:12:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.829 
18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.829 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:32.830 
18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.830 18:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:35.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:35.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:35.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.363 18:12:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:35.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.363 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.364 18:12:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.364 18:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.364 18:12:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:23:35.364 00:23:35.364 --- 10.0.0.2 ping statistics --- 00:23:35.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.364 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:23:35.364 00:23:35.364 --- 10.0.0.1 ping statistics --- 00:23:35.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.364 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1555408 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1555408 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1555408 ']' 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.364 [2024-12-09 18:12:58.157739] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:23:35.364 [2024-12-09 18:12:58.157828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.364 [2024-12-09 18:12:58.232341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.364 [2024-12-09 18:12:58.287232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.364 [2024-12-09 18:12:58.287305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:35.364 [2024-12-09 18:12:58.287343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.364 [2024-12-09 18:12:58.287354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.364 [2024-12-09 18:12:58.287364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.364 [2024-12-09 18:12:58.288066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.364 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.623 [2024-12-09 18:12:58.438825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.623 [2024-12-09 18:12:58.447056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:35.623 null0 00:23:35.623 [2024-12-09 18:12:58.478954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1555548 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1555548 /tmp/host.sock 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1555548 ']' 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:35.623 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.623 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.623 [2024-12-09 18:12:58.544368] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:23:35.623 [2024-12-09 18:12:58.544448] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555548 ] 00:23:35.623 [2024-12-09 18:12:58.609923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.881 [2024-12-09 18:12:58.667046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.881 18:12:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.881 18:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.254 [2024-12-09 18:12:59.926659] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:37.254 [2024-12-09 18:12:59.926685] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:37.254 [2024-12-09 18:12:59.926709] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:37.254 [2024-12-09 18:13:00.014043] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:37.254 [2024-12-09 18:13:00.074851] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:37.254 [2024-12-09 18:13:00.076082] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21bf650:1 started. 
00:23:37.254 [2024-12-09 18:13:00.077807] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:37.254 [2024-12-09 18:13:00.077882] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:37.254 [2024-12-09 18:13:00.077938] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:37.254 [2024-12-09 18:13:00.077963] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.254 [2024-12-09 18:13:00.078009] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.254 [2024-12-09 18:13:00.084844] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21bf650 was disconnected and freed. delete nvme_qpair. 
00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:37.254 18:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.187 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.445 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.445 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:38.445 18:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.378 18:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.311 18:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.684 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.684 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.684 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.685 18:13:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.685 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.685 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.685 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.685 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.685 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.685 18:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.618 18:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:23:42.618 [2024-12-09 18:13:05.519122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:42.618 [2024-12-09 18:13:05.519197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.618 [2024-12-09 18:13:05.519219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.618 [2024-12-09 18:13:05.519238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.618 [2024-12-09 18:13:05.519252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.618 [2024-12-09 18:13:05.519266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.618 [2024-12-09 18:13:05.519279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.618 [2024-12-09 18:13:05.519292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.618 [2024-12-09 18:13:05.519315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.618 [2024-12-09 18:13:05.519332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.618 [2024-12-09 18:13:05.519345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.618 [2024-12-09 18:13:05.519358] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219be90 is same with the state(6) to be set 00:23:42.618 [2024-12-09 18:13:05.529142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219be90 (9): Bad file descriptor 00:23:42.618 [2024-12-09 18:13:05.539192] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.618 [2024-12-09 18:13:05.539218] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.618 [2024-12-09 18:13:05.539232] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.618 [2024-12-09 18:13:05.539245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.618 [2024-12-09 18:13:05.539303] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.552 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.552 [2024-12-09 18:13:06.582592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:43.552 [2024-12-09 18:13:06.582677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219be90 with addr=10.0.0.2, port=4420 00:23:43.552 [2024-12-09 18:13:06.582706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219be90 is same with the state(6) to be set 00:23:43.552 [2024-12-09 18:13:06.582757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219be90 (9): Bad file descriptor 00:23:43.552 [2024-12-09 18:13:06.583246] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:23:43.552 [2024-12-09 18:13:06.583295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.552 [2024-12-09 18:13:06.583313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.552 [2024-12-09 18:13:06.583331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.552 [2024-12-09 18:13:06.583345] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.552 [2024-12-09 18:13:06.583357] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.552 [2024-12-09 18:13:06.583366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.552 [2024-12-09 18:13:06.583381] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.552 [2024-12-09 18:13:06.583390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.810 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.810 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.810 18:13:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.742 [2024-12-09 18:13:07.585897] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:44.742 [2024-12-09 18:13:07.585929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:44.742 [2024-12-09 18:13:07.585949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.742 [2024-12-09 18:13:07.585976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.742 [2024-12-09 18:13:07.585990] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:44.742 [2024-12-09 18:13:07.586002] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:44.742 [2024-12-09 18:13:07.586011] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.742 [2024-12-09 18:13:07.586018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:44.742 [2024-12-09 18:13:07.586062] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:44.742 [2024-12-09 18:13:07.586119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.742 [2024-12-09 18:13:07.586141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.742 [2024-12-09 18:13:07.586162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.742 [2024-12-09 18:13:07.586175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.742 [2024-12-09 18:13:07.586189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:44.743 [2024-12-09 18:13:07.586201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.743 [2024-12-09 18:13:07.586215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.743 [2024-12-09 18:13:07.586228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.743 [2024-12-09 18:13:07.586241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.743 [2024-12-09 18:13:07.586254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.743 [2024-12-09 18:13:07.586268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:44.743 [2024-12-09 18:13:07.586316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b5e0 (9): Bad file descriptor 00:23:44.743 [2024-12-09 18:13:07.587313] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:44.743 [2024-12-09 18:13:07.587335] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:44.743 18:13:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.116 18:13:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.681 [2024-12-09 18:13:09.602199] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.681 [2024-12-09 18:13:09.602224] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.681 [2024-12-09 18:13:09.602247] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.939 [2024-12-09 18:13:09.730693] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.939 18:13:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.939 [2024-12-09 18:13:09.831518] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:46.939 [2024-12-09 18:13:09.832323] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21c8f60:1 started. 
00:23:46.939 [2024-12-09 18:13:09.833694] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:46.939 [2024-12-09 18:13:09.833744] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:46.939 [2024-12-09 18:13:09.833777] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:46.939 [2024-12-09 18:13:09.833801] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:46.939 [2024-12-09 18:13:09.833815] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.939 [2024-12-09 18:13:09.840915] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21c8f60 was disconnected and freed. delete nvme_qpair. 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:47.872 18:13:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1555548 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1555548 ']' 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1555548 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1555548 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1555548' 00:23:47.872 killing process with pid 1555548 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1555548 00:23:47.872 18:13:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1555548 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.130 
18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.130 rmmod nvme_tcp 00:23:48.130 rmmod nvme_fabrics 00:23:48.130 rmmod nvme_keyring 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1555408 ']' 00:23:48.130 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1555408 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1555408 ']' 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1555408 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1555408 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1555408' 00:23:48.389 
killing process with pid 1555408 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1555408 00:23:48.389 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1555408 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.647 18:13:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.551 00:23:50.551 real 0m17.916s 00:23:50.551 user 0m25.775s 00:23:50.551 sys 0m3.052s 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.551 ************************************ 00:23:50.551 END TEST nvmf_discovery_remove_ifc 00:23:50.551 ************************************ 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.551 ************************************ 00:23:50.551 START TEST nvmf_identify_kernel_target 00:23:50.551 ************************************ 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.551 * Looking for test storage... 
00:23:50.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:50.551 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:50.810 18:13:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.810 18:13:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.810 --rc genhtml_branch_coverage=1 00:23:50.810 --rc genhtml_function_coverage=1 00:23:50.810 --rc genhtml_legend=1 00:23:50.810 --rc geninfo_all_blocks=1 00:23:50.810 --rc geninfo_unexecuted_blocks=1 00:23:50.810 00:23:50.810 ' 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.810 --rc genhtml_branch_coverage=1 00:23:50.810 --rc genhtml_function_coverage=1 00:23:50.810 --rc genhtml_legend=1 00:23:50.810 --rc geninfo_all_blocks=1 00:23:50.810 --rc geninfo_unexecuted_blocks=1 00:23:50.810 00:23:50.810 ' 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.810 --rc genhtml_branch_coverage=1 00:23:50.810 --rc genhtml_function_coverage=1 00:23:50.810 --rc genhtml_legend=1 00:23:50.810 --rc geninfo_all_blocks=1 00:23:50.810 --rc geninfo_unexecuted_blocks=1 00:23:50.810 00:23:50.810 ' 00:23:50.810 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.810 --rc genhtml_branch_coverage=1 00:23:50.810 --rc genhtml_function_coverage=1 00:23:50.810 --rc genhtml_legend=1 00:23:50.810 --rc geninfo_all_blocks=1 00:23:50.810 --rc geninfo_unexecuted_blocks=1 00:23:50.810 00:23:50.810 ' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.811 18:13:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.716 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.975 18:13:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.975 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:52.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.976 18:13:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:52.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.976 18:13:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:52.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:52.976 Found net devices under 0000:0a:00.1: cvl_0_1 
00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:23:52.976 00:23:52.976 --- 10.0.0.2 ping statistics --- 00:23:52.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.976 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:23:52.976 18:13:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:23:52.976 00:23:52.976 --- 10.0.0.1 ping statistics --- 00:23:52.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.976 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.976 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:53.236 
18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:53.236 18:13:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:54.171 Waiting for block devices as requested 00:23:54.429 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:54.429 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:54.687 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:54.687 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:54.687 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:54.687 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:54.946 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:54.946 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:54.946 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:54.946 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:55.205 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:55.205 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:55.205 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:55.466 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:55.466 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:23:55.466 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:55.466 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:55.725 No valid GPT data, bailing 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:55.725 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:55.985 00:23:55.985 Discovery Log Number of Records 2, Generation counter 2 00:23:55.985 =====Discovery Log Entry 0====== 00:23:55.985 trtype: tcp 00:23:55.985 adrfam: ipv4 00:23:55.985 subtype: current discovery subsystem 
00:23:55.985 treq: not specified, sq flow control disable supported 00:23:55.985 portid: 1 00:23:55.985 trsvcid: 4420 00:23:55.985 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:55.985 traddr: 10.0.0.1 00:23:55.985 eflags: none 00:23:55.985 sectype: none 00:23:55.985 =====Discovery Log Entry 1====== 00:23:55.985 trtype: tcp 00:23:55.985 adrfam: ipv4 00:23:55.985 subtype: nvme subsystem 00:23:55.985 treq: not specified, sq flow control disable supported 00:23:55.985 portid: 1 00:23:55.985 trsvcid: 4420 00:23:55.985 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:55.985 traddr: 10.0.0.1 00:23:55.985 eflags: none 00:23:55.985 sectype: none 00:23:55.985 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:55.985 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:55.985 ===================================================== 00:23:55.985 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:55.985 ===================================================== 00:23:55.985 Controller Capabilities/Features 00:23:55.985 ================================ 00:23:55.985 Vendor ID: 0000 00:23:55.985 Subsystem Vendor ID: 0000 00:23:55.985 Serial Number: 386e68d548f6a7572db0 00:23:55.985 Model Number: Linux 00:23:55.985 Firmware Version: 6.8.9-20 00:23:55.985 Recommended Arb Burst: 0 00:23:55.985 IEEE OUI Identifier: 00 00 00 00:23:55.985 Multi-path I/O 00:23:55.985 May have multiple subsystem ports: No 00:23:55.985 May have multiple controllers: No 00:23:55.985 Associated with SR-IOV VF: No 00:23:55.985 Max Data Transfer Size: Unlimited 00:23:55.985 Max Number of Namespaces: 0 00:23:55.985 Max Number of I/O Queues: 1024 00:23:55.985 NVMe Specification Version (VS): 1.3 00:23:55.985 NVMe Specification Version (Identify): 1.3 00:23:55.985 Maximum Queue Entries: 1024 
00:23:55.985 Contiguous Queues Required: No 00:23:55.985 Arbitration Mechanisms Supported 00:23:55.985 Weighted Round Robin: Not Supported 00:23:55.985 Vendor Specific: Not Supported 00:23:55.985 Reset Timeout: 7500 ms 00:23:55.985 Doorbell Stride: 4 bytes 00:23:55.985 NVM Subsystem Reset: Not Supported 00:23:55.985 Command Sets Supported 00:23:55.985 NVM Command Set: Supported 00:23:55.985 Boot Partition: Not Supported 00:23:55.985 Memory Page Size Minimum: 4096 bytes 00:23:55.985 Memory Page Size Maximum: 4096 bytes 00:23:55.985 Persistent Memory Region: Not Supported 00:23:55.985 Optional Asynchronous Events Supported 00:23:55.985 Namespace Attribute Notices: Not Supported 00:23:55.985 Firmware Activation Notices: Not Supported 00:23:55.985 ANA Change Notices: Not Supported 00:23:55.985 PLE Aggregate Log Change Notices: Not Supported 00:23:55.985 LBA Status Info Alert Notices: Not Supported 00:23:55.985 EGE Aggregate Log Change Notices: Not Supported 00:23:55.985 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.985 Zone Descriptor Change Notices: Not Supported 00:23:55.985 Discovery Log Change Notices: Supported 00:23:55.985 Controller Attributes 00:23:55.985 128-bit Host Identifier: Not Supported 00:23:55.985 Non-Operational Permissive Mode: Not Supported 00:23:55.985 NVM Sets: Not Supported 00:23:55.985 Read Recovery Levels: Not Supported 00:23:55.985 Endurance Groups: Not Supported 00:23:55.985 Predictable Latency Mode: Not Supported 00:23:55.985 Traffic Based Keep ALive: Not Supported 00:23:55.985 Namespace Granularity: Not Supported 00:23:55.985 SQ Associations: Not Supported 00:23:55.985 UUID List: Not Supported 00:23:55.985 Multi-Domain Subsystem: Not Supported 00:23:55.985 Fixed Capacity Management: Not Supported 00:23:55.985 Variable Capacity Management: Not Supported 00:23:55.985 Delete Endurance Group: Not Supported 00:23:55.985 Delete NVM Set: Not Supported 00:23:55.985 Extended LBA Formats Supported: Not Supported 00:23:55.985 Flexible 
Data Placement Supported: Not Supported 00:23:55.985 00:23:55.985 Controller Memory Buffer Support 00:23:55.985 ================================ 00:23:55.985 Supported: No 00:23:55.985 00:23:55.985 Persistent Memory Region Support 00:23:55.985 ================================ 00:23:55.985 Supported: No 00:23:55.985 00:23:55.985 Admin Command Set Attributes 00:23:55.985 ============================ 00:23:55.985 Security Send/Receive: Not Supported 00:23:55.985 Format NVM: Not Supported 00:23:55.985 Firmware Activate/Download: Not Supported 00:23:55.985 Namespace Management: Not Supported 00:23:55.985 Device Self-Test: Not Supported 00:23:55.985 Directives: Not Supported 00:23:55.985 NVMe-MI: Not Supported 00:23:55.985 Virtualization Management: Not Supported 00:23:55.985 Doorbell Buffer Config: Not Supported 00:23:55.985 Get LBA Status Capability: Not Supported 00:23:55.985 Command & Feature Lockdown Capability: Not Supported 00:23:55.985 Abort Command Limit: 1 00:23:55.985 Async Event Request Limit: 1 00:23:55.985 Number of Firmware Slots: N/A 00:23:55.985 Firmware Slot 1 Read-Only: N/A 00:23:55.985 Firmware Activation Without Reset: N/A 00:23:55.985 Multiple Update Detection Support: N/A 00:23:55.985 Firmware Update Granularity: No Information Provided 00:23:55.985 Per-Namespace SMART Log: No 00:23:55.985 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.985 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:55.985 Command Effects Log Page: Not Supported 00:23:55.985 Get Log Page Extended Data: Supported 00:23:55.985 Telemetry Log Pages: Not Supported 00:23:55.985 Persistent Event Log Pages: Not Supported 00:23:55.985 Supported Log Pages Log Page: May Support 00:23:55.985 Commands Supported & Effects Log Page: Not Supported 00:23:55.985 Feature Identifiers & Effects Log Page:May Support 00:23:55.985 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.985 Data Area 4 for Telemetry Log: Not Supported 00:23:55.985 Error Log Page Entries 
Supported: 1 00:23:55.985 Keep Alive: Not Supported 00:23:55.985 00:23:55.985 NVM Command Set Attributes 00:23:55.985 ========================== 00:23:55.985 Submission Queue Entry Size 00:23:55.985 Max: 1 00:23:55.985 Min: 1 00:23:55.985 Completion Queue Entry Size 00:23:55.985 Max: 1 00:23:55.985 Min: 1 00:23:55.985 Number of Namespaces: 0 00:23:55.985 Compare Command: Not Supported 00:23:55.985 Write Uncorrectable Command: Not Supported 00:23:55.985 Dataset Management Command: Not Supported 00:23:55.985 Write Zeroes Command: Not Supported 00:23:55.985 Set Features Save Field: Not Supported 00:23:55.985 Reservations: Not Supported 00:23:55.985 Timestamp: Not Supported 00:23:55.985 Copy: Not Supported 00:23:55.985 Volatile Write Cache: Not Present 00:23:55.985 Atomic Write Unit (Normal): 1 00:23:55.985 Atomic Write Unit (PFail): 1 00:23:55.985 Atomic Compare & Write Unit: 1 00:23:55.985 Fused Compare & Write: Not Supported 00:23:55.985 Scatter-Gather List 00:23:55.985 SGL Command Set: Supported 00:23:55.986 SGL Keyed: Not Supported 00:23:55.986 SGL Bit Bucket Descriptor: Not Supported 00:23:55.986 SGL Metadata Pointer: Not Supported 00:23:55.986 Oversized SGL: Not Supported 00:23:55.986 SGL Metadata Address: Not Supported 00:23:55.986 SGL Offset: Supported 00:23:55.986 Transport SGL Data Block: Not Supported 00:23:55.986 Replay Protected Memory Block: Not Supported 00:23:55.986 00:23:55.986 Firmware Slot Information 00:23:55.986 ========================= 00:23:55.986 Active slot: 0 00:23:55.986 00:23:55.986 00:23:55.986 Error Log 00:23:55.986 ========= 00:23:55.986 00:23:55.986 Active Namespaces 00:23:55.986 ================= 00:23:55.986 Discovery Log Page 00:23:55.986 ================== 00:23:55.986 Generation Counter: 2 00:23:55.986 Number of Records: 2 00:23:55.986 Record Format: 0 00:23:55.986 00:23:55.986 Discovery Log Entry 0 00:23:55.986 ---------------------- 00:23:55.986 Transport Type: 3 (TCP) 00:23:55.986 Address Family: 1 (IPv4) 00:23:55.986 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:23:55.986 Entry Flags: 00:23:55.986 Duplicate Returned Information: 0 00:23:55.986 Explicit Persistent Connection Support for Discovery: 0 00:23:55.986 Transport Requirements: 00:23:55.986 Secure Channel: Not Specified 00:23:55.986 Port ID: 1 (0x0001) 00:23:55.986 Controller ID: 65535 (0xffff) 00:23:55.986 Admin Max SQ Size: 32 00:23:55.986 Transport Service Identifier: 4420 00:23:55.986 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:55.986 Transport Address: 10.0.0.1 00:23:55.986 Discovery Log Entry 1 00:23:55.986 ---------------------- 00:23:55.986 Transport Type: 3 (TCP) 00:23:55.986 Address Family: 1 (IPv4) 00:23:55.986 Subsystem Type: 2 (NVM Subsystem) 00:23:55.986 Entry Flags: 00:23:55.986 Duplicate Returned Information: 0 00:23:55.986 Explicit Persistent Connection Support for Discovery: 0 00:23:55.986 Transport Requirements: 00:23:55.986 Secure Channel: Not Specified 00:23:55.986 Port ID: 1 (0x0001) 00:23:55.986 Controller ID: 65535 (0xffff) 00:23:55.986 Admin Max SQ Size: 32 00:23:55.986 Transport Service Identifier: 4420 00:23:55.986 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:55.986 Transport Address: 10.0.0.1 00:23:55.986 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:55.986 get_feature(0x01) failed 00:23:55.986 get_feature(0x02) failed 00:23:55.986 get_feature(0x04) failed 00:23:55.986 ===================================================== 00:23:55.986 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:55.986 ===================================================== 00:23:55.986 Controller Capabilities/Features 00:23:55.986 ================================ 00:23:55.986 Vendor ID: 0000 00:23:55.986 Subsystem Vendor ID: 
0000 00:23:55.986 Serial Number: 405f52c5ea81b6050c64 00:23:55.986 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:55.986 Firmware Version: 6.8.9-20 00:23:55.986 Recommended Arb Burst: 6 00:23:55.986 IEEE OUI Identifier: 00 00 00 00:23:55.986 Multi-path I/O 00:23:55.986 May have multiple subsystem ports: Yes 00:23:55.986 May have multiple controllers: Yes 00:23:55.986 Associated with SR-IOV VF: No 00:23:55.986 Max Data Transfer Size: Unlimited 00:23:55.986 Max Number of Namespaces: 1024 00:23:55.986 Max Number of I/O Queues: 128 00:23:55.986 NVMe Specification Version (VS): 1.3 00:23:55.986 NVMe Specification Version (Identify): 1.3 00:23:55.986 Maximum Queue Entries: 1024 00:23:55.986 Contiguous Queues Required: No 00:23:55.986 Arbitration Mechanisms Supported 00:23:55.986 Weighted Round Robin: Not Supported 00:23:55.986 Vendor Specific: Not Supported 00:23:55.986 Reset Timeout: 7500 ms 00:23:55.986 Doorbell Stride: 4 bytes 00:23:55.986 NVM Subsystem Reset: Not Supported 00:23:55.986 Command Sets Supported 00:23:55.986 NVM Command Set: Supported 00:23:55.986 Boot Partition: Not Supported 00:23:55.986 Memory Page Size Minimum: 4096 bytes 00:23:55.986 Memory Page Size Maximum: 4096 bytes 00:23:55.986 Persistent Memory Region: Not Supported 00:23:55.986 Optional Asynchronous Events Supported 00:23:55.986 Namespace Attribute Notices: Supported 00:23:55.986 Firmware Activation Notices: Not Supported 00:23:55.986 ANA Change Notices: Supported 00:23:55.986 PLE Aggregate Log Change Notices: Not Supported 00:23:55.986 LBA Status Info Alert Notices: Not Supported 00:23:55.986 EGE Aggregate Log Change Notices: Not Supported 00:23:55.986 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.986 Zone Descriptor Change Notices: Not Supported 00:23:55.986 Discovery Log Change Notices: Not Supported 00:23:55.986 Controller Attributes 00:23:55.986 128-bit Host Identifier: Supported 00:23:55.986 Non-Operational Permissive Mode: Not Supported 00:23:55.986 NVM Sets: Not 
Supported 00:23:55.986 Read Recovery Levels: Not Supported 00:23:55.986 Endurance Groups: Not Supported 00:23:55.986 Predictable Latency Mode: Not Supported 00:23:55.986 Traffic Based Keep ALive: Supported 00:23:55.986 Namespace Granularity: Not Supported 00:23:55.986 SQ Associations: Not Supported 00:23:55.986 UUID List: Not Supported 00:23:55.986 Multi-Domain Subsystem: Not Supported 00:23:55.986 Fixed Capacity Management: Not Supported 00:23:55.986 Variable Capacity Management: Not Supported 00:23:55.986 Delete Endurance Group: Not Supported 00:23:55.986 Delete NVM Set: Not Supported 00:23:55.986 Extended LBA Formats Supported: Not Supported 00:23:55.986 Flexible Data Placement Supported: Not Supported 00:23:55.986 00:23:55.986 Controller Memory Buffer Support 00:23:55.986 ================================ 00:23:55.986 Supported: No 00:23:55.986 00:23:55.986 Persistent Memory Region Support 00:23:55.986 ================================ 00:23:55.986 Supported: No 00:23:55.986 00:23:55.986 Admin Command Set Attributes 00:23:55.986 ============================ 00:23:55.986 Security Send/Receive: Not Supported 00:23:55.986 Format NVM: Not Supported 00:23:55.986 Firmware Activate/Download: Not Supported 00:23:55.986 Namespace Management: Not Supported 00:23:55.986 Device Self-Test: Not Supported 00:23:55.986 Directives: Not Supported 00:23:55.986 NVMe-MI: Not Supported 00:23:55.986 Virtualization Management: Not Supported 00:23:55.986 Doorbell Buffer Config: Not Supported 00:23:55.986 Get LBA Status Capability: Not Supported 00:23:55.986 Command & Feature Lockdown Capability: Not Supported 00:23:55.986 Abort Command Limit: 4 00:23:55.986 Async Event Request Limit: 4 00:23:55.986 Number of Firmware Slots: N/A 00:23:55.986 Firmware Slot 1 Read-Only: N/A 00:23:55.986 Firmware Activation Without Reset: N/A 00:23:55.986 Multiple Update Detection Support: N/A 00:23:55.986 Firmware Update Granularity: No Information Provided 00:23:55.986 Per-Namespace SMART Log: Yes 
00:23:55.986 Asymmetric Namespace Access Log Page: Supported 00:23:55.986 ANA Transition Time : 10 sec 00:23:55.986 00:23:55.986 Asymmetric Namespace Access Capabilities 00:23:55.986 ANA Optimized State : Supported 00:23:55.986 ANA Non-Optimized State : Supported 00:23:55.986 ANA Inaccessible State : Supported 00:23:55.986 ANA Persistent Loss State : Supported 00:23:55.986 ANA Change State : Supported 00:23:55.986 ANAGRPID is not changed : No 00:23:55.986 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:55.986 00:23:55.986 ANA Group Identifier Maximum : 128 00:23:55.986 Number of ANA Group Identifiers : 128 00:23:55.986 Max Number of Allowed Namespaces : 1024 00:23:55.986 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:55.987 Command Effects Log Page: Supported 00:23:55.987 Get Log Page Extended Data: Supported 00:23:55.987 Telemetry Log Pages: Not Supported 00:23:55.987 Persistent Event Log Pages: Not Supported 00:23:55.987 Supported Log Pages Log Page: May Support 00:23:55.987 Commands Supported & Effects Log Page: Not Supported 00:23:55.987 Feature Identifiers & Effects Log Page:May Support 00:23:55.987 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.987 Data Area 4 for Telemetry Log: Not Supported 00:23:55.987 Error Log Page Entries Supported: 128 00:23:55.987 Keep Alive: Supported 00:23:55.987 Keep Alive Granularity: 1000 ms 00:23:55.987 00:23:55.987 NVM Command Set Attributes 00:23:55.987 ========================== 00:23:55.987 Submission Queue Entry Size 00:23:55.987 Max: 64 00:23:55.987 Min: 64 00:23:55.987 Completion Queue Entry Size 00:23:55.987 Max: 16 00:23:55.987 Min: 16 00:23:55.987 Number of Namespaces: 1024 00:23:55.987 Compare Command: Not Supported 00:23:55.987 Write Uncorrectable Command: Not Supported 00:23:55.987 Dataset Management Command: Supported 00:23:55.987 Write Zeroes Command: Supported 00:23:55.987 Set Features Save Field: Not Supported 00:23:55.987 Reservations: Not Supported 00:23:55.987 Timestamp: Not Supported 
00:23:55.987 Copy: Not Supported 00:23:55.987 Volatile Write Cache: Present 00:23:55.987 Atomic Write Unit (Normal): 1 00:23:55.987 Atomic Write Unit (PFail): 1 00:23:55.987 Atomic Compare & Write Unit: 1 00:23:55.987 Fused Compare & Write: Not Supported 00:23:55.987 Scatter-Gather List 00:23:55.987 SGL Command Set: Supported 00:23:55.987 SGL Keyed: Not Supported 00:23:55.987 SGL Bit Bucket Descriptor: Not Supported 00:23:55.987 SGL Metadata Pointer: Not Supported 00:23:55.987 Oversized SGL: Not Supported 00:23:55.987 SGL Metadata Address: Not Supported 00:23:55.987 SGL Offset: Supported 00:23:55.987 Transport SGL Data Block: Not Supported 00:23:55.987 Replay Protected Memory Block: Not Supported 00:23:55.987 00:23:55.987 Firmware Slot Information 00:23:55.987 ========================= 00:23:55.987 Active slot: 0 00:23:55.987 00:23:55.987 Asymmetric Namespace Access 00:23:55.987 =========================== 00:23:55.987 Change Count : 0 00:23:55.987 Number of ANA Group Descriptors : 1 00:23:55.987 ANA Group Descriptor : 0 00:23:55.987 ANA Group ID : 1 00:23:55.987 Number of NSID Values : 1 00:23:55.987 Change Count : 0 00:23:55.987 ANA State : 1 00:23:55.987 Namespace Identifier : 1 00:23:55.987 00:23:55.987 Commands Supported and Effects 00:23:55.987 ============================== 00:23:55.987 Admin Commands 00:23:55.987 -------------- 00:23:55.987 Get Log Page (02h): Supported 00:23:55.987 Identify (06h): Supported 00:23:55.987 Abort (08h): Supported 00:23:55.987 Set Features (09h): Supported 00:23:55.987 Get Features (0Ah): Supported 00:23:55.987 Asynchronous Event Request (0Ch): Supported 00:23:55.987 Keep Alive (18h): Supported 00:23:55.987 I/O Commands 00:23:55.987 ------------ 00:23:55.987 Flush (00h): Supported 00:23:55.987 Write (01h): Supported LBA-Change 00:23:55.987 Read (02h): Supported 00:23:55.987 Write Zeroes (08h): Supported LBA-Change 00:23:55.987 Dataset Management (09h): Supported 00:23:55.987 00:23:55.987 Error Log 00:23:55.987 ========= 
00:23:55.987 Entry: 0 00:23:55.987 Error Count: 0x3 00:23:55.987 Submission Queue Id: 0x0 00:23:55.987 Command Id: 0x5 00:23:55.987 Phase Bit: 0 00:23:55.987 Status Code: 0x2 00:23:55.987 Status Code Type: 0x0 00:23:55.987 Do Not Retry: 1 00:23:55.987 Error Location: 0x28 00:23:55.987 LBA: 0x0 00:23:55.987 Namespace: 0x0 00:23:55.987 Vendor Log Page: 0x0 00:23:55.987 ----------- 00:23:55.987 Entry: 1 00:23:55.987 Error Count: 0x2 00:23:55.987 Submission Queue Id: 0x0 00:23:55.987 Command Id: 0x5 00:23:55.987 Phase Bit: 0 00:23:55.987 Status Code: 0x2 00:23:55.987 Status Code Type: 0x0 00:23:55.987 Do Not Retry: 1 00:23:55.987 Error Location: 0x28 00:23:55.987 LBA: 0x0 00:23:55.987 Namespace: 0x0 00:23:55.987 Vendor Log Page: 0x0 00:23:55.987 ----------- 00:23:55.987 Entry: 2 00:23:55.987 Error Count: 0x1 00:23:55.987 Submission Queue Id: 0x0 00:23:55.987 Command Id: 0x4 00:23:55.987 Phase Bit: 0 00:23:55.987 Status Code: 0x2 00:23:55.987 Status Code Type: 0x0 00:23:55.987 Do Not Retry: 1 00:23:55.987 Error Location: 0x28 00:23:55.987 LBA: 0x0 00:23:55.987 Namespace: 0x0 00:23:55.987 Vendor Log Page: 0x0 00:23:55.987 00:23:55.987 Number of Queues 00:23:55.987 ================ 00:23:55.987 Number of I/O Submission Queues: 128 00:23:55.987 Number of I/O Completion Queues: 128 00:23:55.987 00:23:55.987 ZNS Specific Controller Data 00:23:55.987 ============================ 00:23:55.987 Zone Append Size Limit: 0 00:23:55.987 00:23:55.987 00:23:55.987 Active Namespaces 00:23:55.987 ================= 00:23:55.987 get_feature(0x05) failed 00:23:55.987 Namespace ID:1 00:23:55.987 Command Set Identifier: NVM (00h) 00:23:55.987 Deallocate: Supported 00:23:55.987 Deallocated/Unwritten Error: Not Supported 00:23:55.987 Deallocated Read Value: Unknown 00:23:55.987 Deallocate in Write Zeroes: Not Supported 00:23:55.987 Deallocated Guard Field: 0xFFFF 00:23:55.987 Flush: Supported 00:23:55.987 Reservation: Not Supported 00:23:55.987 Namespace Sharing Capabilities: Multiple 
Controllers 00:23:55.987 Size (in LBAs): 1953525168 (931GiB) 00:23:55.987 Capacity (in LBAs): 1953525168 (931GiB) 00:23:55.987 Utilization (in LBAs): 1953525168 (931GiB) 00:23:55.987 UUID: aefd1a7b-268a-4f8e-a3fc-9d2b94d13fbc 00:23:55.987 Thin Provisioning: Not Supported 00:23:55.987 Per-NS Atomic Units: Yes 00:23:55.987 Atomic Boundary Size (Normal): 0 00:23:55.987 Atomic Boundary Size (PFail): 0 00:23:55.987 Atomic Boundary Offset: 0 00:23:55.987 NGUID/EUI64 Never Reused: No 00:23:55.987 ANA group ID: 1 00:23:55.987 Namespace Write Protected: No 00:23:55.987 Number of LBA Formats: 1 00:23:55.987 Current LBA Format: LBA Format #00 00:23:55.987 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.987 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.987 18:13:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.987 rmmod nvme_tcp 00:23:55.987 rmmod nvme_fabrics 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.987 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.988 18:13:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:58.525 18:13:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:58.525 18:13:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:59.462 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:59.462 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:59.462 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:24:00.398 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:00.398 00:24:00.398 real 0m9.830s 00:24:00.398 user 0m2.155s 00:24:00.398 sys 0m3.596s 00:24:00.398 18:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.398 18:13:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.398 ************************************ 00:24:00.398 END TEST nvmf_identify_kernel_target 00:24:00.398 ************************************ 00:24:00.398 18:13:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:00.399 18:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.399 18:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.399 18:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.399 ************************************ 00:24:00.399 START TEST nvmf_auth_host 00:24:00.399 ************************************ 00:24:00.399 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:00.658 * Looking for test storage... 
00:24:00.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.658 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.659 --rc genhtml_branch_coverage=1 00:24:00.659 --rc genhtml_function_coverage=1 00:24:00.659 --rc genhtml_legend=1 00:24:00.659 --rc geninfo_all_blocks=1 00:24:00.659 --rc geninfo_unexecuted_blocks=1 00:24:00.659 00:24:00.659 ' 00:24:00.659 18:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.659 --rc genhtml_branch_coverage=1 00:24:00.659 --rc genhtml_function_coverage=1 00:24:00.659 --rc genhtml_legend=1 00:24:00.659 --rc geninfo_all_blocks=1 00:24:00.659 --rc geninfo_unexecuted_blocks=1 00:24:00.659 00:24:00.659 ' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.659 --rc genhtml_branch_coverage=1 00:24:00.659 --rc genhtml_function_coverage=1 00:24:00.659 --rc genhtml_legend=1 00:24:00.659 --rc geninfo_all_blocks=1 00:24:00.659 --rc geninfo_unexecuted_blocks=1 00:24:00.659 00:24:00.659 ' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.659 --rc genhtml_branch_coverage=1 00:24:00.659 --rc genhtml_function_coverage=1 00:24:00.659 --rc genhtml_legend=1 00:24:00.659 --rc geninfo_all_blocks=1 00:24:00.659 --rc geninfo_unexecuted_blocks=1 00:24:00.659 00:24:00.659 ' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.659 18:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:00.659 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.660 18:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.660 18:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.194 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:03.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:03.195 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:03.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:03.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:03.195 18:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.195 18:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:24:03.195 00:24:03.195 --- 10.0.0.2 ping statistics --- 00:24:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.195 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:24:03.195 00:24:03.195 --- 10.0.0.1 ping statistics --- 00:24:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.195 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1562644 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:03.195 18:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1562644 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1562644 ']' 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.195 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.196 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.196 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.196 18:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c9ff3d25954714a1292095ef231c130 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CTY 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c9ff3d25954714a1292095ef231c130 0 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c9ff3d25954714a1292095ef231c130 0 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c9ff3d25954714a1292095ef231c130 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.196 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CTY 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CTY 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CTY 00:24:03.454 18:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3d7abe2096332c67bc04a3f062fd232af4fba442d6d47a402375aaedb80e72d 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Yir 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3d7abe2096332c67bc04a3f062fd232af4fba442d6d47a402375aaedb80e72d 3 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3d7abe2096332c67bc04a3f062fd232af4fba442d6d47a402375aaedb80e72d 3 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3d7abe2096332c67bc04a3f062fd232af4fba442d6d47a402375aaedb80e72d 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Yir 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Yir 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Yir 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b61c079589d8cf042f5e84bfca5efe0e3ff4943aaac8e677 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kf1 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b61c079589d8cf042f5e84bfca5efe0e3ff4943aaac8e677 0 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b61c079589d8cf042f5e84bfca5efe0e3ff4943aaac8e677 0 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.454 18:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b61c079589d8cf042f5e84bfca5efe0e3ff4943aaac8e677 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:03.454 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kf1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kf1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kf1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=66f5b50c4eada9ba4ab72b3127056f6d673d9c9a198799ef 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.JQJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 66f5b50c4eada9ba4ab72b3127056f6d673d9c9a198799ef 2 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 66f5b50c4eada9ba4ab72b3127056f6d673d9c9a198799ef 2 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=66f5b50c4eada9ba4ab72b3127056f6d673d9c9a198799ef 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.JQJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.JQJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.JQJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ad51da2adcdcba31d2cbb1a2df080a5 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.itJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ad51da2adcdcba31d2cbb1a2df080a5 1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ad51da2adcdcba31d2cbb1a2df080a5 1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ad51da2adcdcba31d2cbb1a2df080a5 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.itJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.itJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.itJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=5d1066f2ec39fda4e9b8d42e2912f9a3 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JJJ 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d1066f2ec39fda4e9b8d42e2912f9a3 1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d1066f2ec39fda4e9b8d42e2912f9a3 1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5d1066f2ec39fda4e9b8d42e2912f9a3 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:03.455 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JJJ 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JJJ 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JJJ 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:03.714 18:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7a0b48ef8bcd0b8fe04739437d3e71b904bce62b04cac0dc 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tDC 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7a0b48ef8bcd0b8fe04739437d3e71b904bce62b04cac0dc 2 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7a0b48ef8bcd0b8fe04739437d3e71b904bce62b04cac0dc 2 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7a0b48ef8bcd0b8fe04739437d3e71b904bce62b04cac0dc 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tDC 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tDC 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tDC 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff94b7b08381d697afd9dc07652cf3bf 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Wb1 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff94b7b08381d697afd9dc07652cf3bf 0 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff94b7b08381d697afd9dc07652cf3bf 0 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff94b7b08381d697afd9dc07652cf3bf 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Wb1 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Wb1 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Wb1 00:24:03.714 18:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=296f6ec8bd1f04ca703032b6de7736c3a0e4187535f97f76dfe16b4990335031 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Y5y 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 296f6ec8bd1f04ca703032b6de7736c3a0e4187535f97f76dfe16b4990335031 3 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 296f6ec8bd1f04ca703032b6de7736c3a0e4187535f97f76dfe16b4990335031 3 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=296f6ec8bd1f04ca703032b6de7736c3a0e4187535f97f76dfe16b4990335031 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Y5y 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Y5y 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Y5y 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1562644 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1562644 ']' 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
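The `gen_dhchap_key` trace above draws random hex from `/dev/urandom` via `xxd`, then pipes it through `format_key DHHC-1 <hex> <digest>` (an inline `python -` snippet) before writing the result to a `mktemp` file chmod'd to 0600. The wrapping itself is not shown in the log, but the logged inputs and outputs (e.g. key `b61c...e677` becoming `DHHC-1:00:YjYx...4iYuBg==:` further down) are consistent with base64-encoding the ASCII hex secret plus a trailing little-endian CRC32. The sketch below is a reconstruction inferred from those logged values, not SPDK's actual `nvmf/common.sh` code:

```python
import base64
import struct
import zlib


def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Wrap a hex secret as a DHHC-1 key string.

    Sketch inferred from the logged values: the base64 payload appears to
    be the ASCII hex secret followed by a 4-byte little-endian CRC32 of
    that secret, with the digest id rendered as two hex digits.
    """
    data = key_hex.encode("ascii")
    data += struct.pack("<I", zlib.crc32(data))
    return f"DHHC-1:{digest:02x}:{base64.b64encode(data).decode('ascii')}:"


# Round-trip check against the structure seen in the trace:
wrapped = format_dhchap_key("b61c079589d8cf042f5e84bfca5efe0e3ff4943aaac8e677", 0)
raw = base64.b64decode(wrapped.split(":")[2])
assert raw[:-4] == b"b61c079589d8cf042f5e84bfca5efe0e3ff4943aaac8e677"
assert struct.unpack("<I", raw[-4:])[0] == zlib.crc32(raw[:-4])
```

The self-check at the end only verifies internal consistency of the sketch (payload decodes back to the secret and the CRC matches); the authoritative format is defined by the NVMe DH-HMAC-CHAP specification and SPDK's own helper.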
00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.714 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CTY 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Yir ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Yir 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kf1 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.JQJ ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JQJ 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.itJ 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JJJ ]] 00:24:03.973 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JJJ 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.tDC 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Wb1 ]] 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Wb1 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.974 18:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.974 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.974 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:03.974 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Y5y 00:24:03.974 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.974 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.232 18:13:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:04.232 18:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:05.168 Waiting for block devices as requested 00:24:05.426 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:05.426 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:05.685 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:05.685 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:05.685 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:05.685 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:05.944 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:05.944 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:05.944 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:05.944 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:06.202 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:06.202 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:06.202 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:06.202 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:06.460 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:06.460 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:06.460 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:07.097 No valid GPT data, bailing 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:07.097 00:24:07.097 Discovery Log Number of Records 2, Generation counter 2 00:24:07.097 =====Discovery Log Entry 0====== 00:24:07.097 trtype: tcp 00:24:07.097 adrfam: ipv4 00:24:07.097 subtype: current discovery subsystem 00:24:07.097 treq: not specified, sq flow control disable supported 00:24:07.097 portid: 1 00:24:07.097 trsvcid: 4420 00:24:07.097 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:07.097 traddr: 10.0.0.1 00:24:07.097 eflags: none 00:24:07.097 sectype: none 00:24:07.097 =====Discovery Log Entry 1====== 00:24:07.097 trtype: tcp 00:24:07.097 adrfam: ipv4 00:24:07.097 subtype: nvme subsystem 00:24:07.097 treq: not specified, sq flow control disable supported 00:24:07.097 portid: 1 00:24:07.097 trsvcid: 4420 00:24:07.097 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:07.097 traddr: 10.0.0.1 00:24:07.097 eflags: none 00:24:07.097 sectype: none 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:07.097 18:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.097 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.098 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.098 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.359 nvme0n1 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.359 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.360 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.618 nvme0n1 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.618 18:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.618 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.619 
18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.619 nvme0n1 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.619 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.878 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:07.878 nvme0n1 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.879 18:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.138 nvme0n1 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:08.138 18:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.138 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.397 nvme0n1 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.397 
18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:08.397 
18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.397 18:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.397 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.655 nvme0n1 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.655 18:13:31 
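The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line traced above (host/auth.sh@58) uses bash's `:+` conditional expansion to build an optional argument list. A minimal standalone sketch of that idiom (the array contents below are illustrative, not taken from auth.sh):

```shell
#!/usr/bin/env bash
# Demonstrates the ${var:+...} conditional-array idiom used to pass
# --dhchap-ctrlr-key only when a controller key exists for the keyid.
ckeys=( "" "DHHC-1:02:example==" )   # keyid 0: no ctrlr key; keyid 1: has one

for keyid in 0 1; do
    # Expands to zero words when ckeys[keyid] is empty or unset,
    # and to two words (--dhchap-ctrlr-key ckeyN) otherwise.
    ckey=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "keyid=${keyid} extra_args=${#ckey[@]}"
done
# keyid=0 extra_args=0
# keyid=1 extra_args=2
```

This is why the keyid=4 attach in the log carries no `--dhchap-ctrlr-key` argument: its ckey entry is empty, so the expansion contributes nothing to the RPC command line.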
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.655 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.656 18:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.656 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.914 nvme0n1 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.914 18:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.914 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.915 18:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.174 nvme0n1 00:24:09.174 18:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:09.174 18:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.174 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.433 nvme0n1 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
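The `get_main_ns_ip` sequence traced repeatedly above (nvmf/common.sh@769-783) resolves the connect address by mapping the transport to an environment-variable *name* and then dereferencing that name with indirect expansion. A simplified sketch, with `TEST_TRANSPORT` and the `NVMF_*` addresses hard-coded for illustration:

```shell
#!/usr/bin/env bash
# Simplified model of get_main_ns_ip: map transport -> variable name,
# then use indirect expansion to fetch that variable's value.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1       # illustrative values
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}                    # indirect expansion: name -> value
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip                   # prints 10.0.0.1 for the tcp transport
```

This matches the trace: `ip=NVMF_INITIATOR_IP` is the variable name, and the subsequent `[[ -z 10.0.0.1 ]]` / `echo 10.0.0.1` show the dereferenced value being checked and returned.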
00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:09.433 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.434 18:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.434 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.692 nvme0n1 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.692 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.693 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.951 nvme0n1 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:09.951 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.952 
18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.952 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.209 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.209 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.210 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.210 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.210 18:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.468 nvme0n1 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.468 18:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.468 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.727 nvme0n1 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.727 18:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:10.727 
18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.727 18:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.727 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.728 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.728 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.728 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.728 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.728 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.986 nvme0n1 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.986 18:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.986 18:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.986 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.986 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.987 
18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.987 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.245 nvme0n1 00:24:11.245 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.245 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.245 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.245 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.245 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.503 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.503 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.503 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.504 18:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.504 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 nvme0n1 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.071 18:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 18:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.636 nvme0n1 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.636 18:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.636 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.202 nvme0n1 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.202 18:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.202 18:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.202 18:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.202 18:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.202 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.768 nvme0n1 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.768 18:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.768 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.769 18:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.769 18:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.027 nvme0n1 00:24:14.027 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.027 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.027 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.027 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.027 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.027 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.285 18:13:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.285 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.220 nvme0n1 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.220 18:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.220 18:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.220 18:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.220 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.221 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.221 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.221 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.221 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.221 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.221 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.221 18:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.154 nvme0n1 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.154 18:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.154 18:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.721 nvme0n1 00:24:16.721 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.721 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.721 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.721 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.721 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.980 18:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.915 nvme0n1 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.915 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.916 
18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.916 18:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.851 nvme0n1 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.851 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.852 nvme0n1 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.852 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.111 
18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.111 18:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.111 nvme0n1 
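The repeating trace blocks above are iterations of the nested loop visible at host/auth.sh@100-@104: for each digest, each DH group, and each key index, the test calls `nvmet_auth_set_key` and then `connect_authenticate` (set options, attach `nvme0`, verify, detach). A minimal sketch of that loop structure, assuming the loop bodies (the names, digests `sha256`/`sha384`, dhgroups `ffdhe8192`/`ffdhe2048`, and keyids 0-4 are taken from this log excerpt; the echo stands in for the real RPC calls):

```shell
# Sketch of the digest/dhgroup/keyid loop driving the traces above.
# Values below are the ones exercised in this log excerpt (an assumption
# about the full arrays in host/auth.sh).
digests=("sha256" "sha384")
dhgroups=("ffdhe8192" "ffdhe2048")
keys=(key0 key1 key2 key3 key4)

count=0
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # In the real test this is nvmet_auth_set_key + connect_authenticate
      # (bdev_nvme_set_options / bdev_nvme_attach_controller / detach);
      # here we only print the combination being exercised.
      echo "auth: digest=$digest dhgroup=$dhgroup keyid=$keyid"
      count=$((count + 1))
    done
  done
done
```

Each printed combination corresponds to one `nvme0n1`-delimited block in the log: 2 digests x 2 dhgroups x 5 keyids = 20 attach/detach cycles for this excerpt's parameter set.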
00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:19.111 18:13:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.111 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.112 
18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.112 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.371 nvme0n1 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.371 18:13:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.371 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.630 nvme0n1 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.630 18:13:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.630 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.889 nvme0n1 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.889 18:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.147 nvme0n1 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.147 
18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:20.147 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.148 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.406 nvme0n1 00:24:20.406 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 
00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.407 18:13:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.407 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.665 nvme0n1 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.665 18:13:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:20.665 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.666 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.924 nvme0n1 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.924 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.925 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.925 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.925 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.925 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.925 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.925 18:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.183 nvme0n1 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.183 18:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.183 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.184 18:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.184 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.184 18:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.442 nvme0n1 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.442 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.443 
18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.443 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.701 nvme0n1 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.701 18:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.701 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.959 18:13:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.959 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.960 18:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.218 nvme0n1 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.218 18:13:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.218 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 nvme0n1 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 18:13:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.476 18:13:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.476 
18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.476 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.734 nvme0n1 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.734 18:13:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.734 18:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.301 nvme0n1 
00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:23.301 18:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.301 
18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.301 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.868 nvme0n1 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.868 18:13:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.868 18:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.436 nvme0n1 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.436 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.004 nvme0n1 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.004 18:13:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.571 nvme0n1 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.571 18:13:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.571 18:13:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.571 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.572 18:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.507 nvme0n1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:26.507 18:13:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.507 18:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.442 nvme0n1 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.442 
18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.442 18:13:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.442 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.443 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.443 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.443 18:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.377 nvme0n1 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.378 18:13:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.378 18:13:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.378 18:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.315 nvme0n1 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:29.315 18:13:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.315 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.316 18:13:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.334 nvme0n1 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.334 
18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.334 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.335 nvme0n1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.335 18:13:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.335 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.593 nvme0n1 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:30.593 18:13:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.593 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.851 nvme0n1 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.851 18:13:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==:
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc:
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==:
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]]
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc:
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.851 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.109 nvme0n1
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:31.109 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=:
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=:
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.110 18:13:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.110 nvme0n1
00:24:31.110 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.110 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.110 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.110 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.110 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY:
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=:
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY:
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=:
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.368 nvme0n1
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.368 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==:
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==:
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==:
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]]
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==:
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.625 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.626 nvme0n1
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.626 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6:
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG:
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6:
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG:
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.883 nvme0n1
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:31.883 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.140 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.140 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.140 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.140 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.140 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.140 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==:
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc:
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==:
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]]
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc:
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.141 18:13:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.141 nvme0n1
00:24:32.141 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.141 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.141 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.141 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.141 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:32.141 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=:
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=:
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.398 nvme0n1
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:32.398 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY:
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=:
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY:
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]]
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=:
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:32.655 18:13:55
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.655 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.656 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.913 nvme0n1 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.913 18:13:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:32.913 18:13:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.913 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.914 18:13:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.914 18:13:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.172 nvme0n1 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.172 18:13:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:33.172 18:13:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.172 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.173 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.173 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.173 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.173 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.430 nvme0n1 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.430 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.688 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.689 18:13:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.689 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.947 nvme0n1 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.947 
18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.947 18:13:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.218 nvme0n1 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.218 18:13:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.218 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.783 nvme0n1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:34.783 18:13:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.783 18:13:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.409 nvme0n1 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:35.409 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.410 
18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.410 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.975 nvme0n1 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.975 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.976 18:13:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.976 18:13:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:36.541 nvme0n1 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.541 
18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.541 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.107 nvme0n1 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5ZmYzZDI1OTU0NzE0YTEyOTIwOTVlZjIzMWMxMzA9bNbY: 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: ]] 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNkN2FiZTIwOTYzMzJjNjdiYzA0YTNmMDYyZmQyMzJhZjRmYmE0NDJkNmQ0N2E0MDIzNzVhYWVkYjgwZTcyZFVkOFg=: 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.107 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.108 18:13:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.108 18:13:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.041 nvme0n1 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.041 18:14:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.041 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.042 18:14:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.042 18:14:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.974 nvme0n1 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.974 18:14:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.974 18:14:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.974 18:14:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.907 nvme0n1 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.907 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.908 18:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2EwYjQ4ZWY4YmNkMGI4ZmUwNDczOTQzN2QzZTcxYjkwNGJjZTYyYjA0Y2FjMGRj1q7URA==: 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5NGI3YjA4MzgxZDY5N2FmZDlkYzA3NjUyY2YzYmZQ/dUc: 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.908 18:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:40.842 nvme0n1 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjk2ZjZlYzhiZDFmMDRjYTcwMzAzMmI2ZGU3NzM2YzNhMGU0MTg3NTM1Zjk3Zjc2ZGZlMTZiNDk5MDMzNTAzMZ7cINo=: 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.842 
18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.842 18:14:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.776 nvme0n1 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.776 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:41.777 
18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.777 request: 00:24:41.777 { 00:24:41.777 "name": "nvme0", 00:24:41.777 "trtype": "tcp", 00:24:41.777 "traddr": "10.0.0.1", 00:24:41.777 "adrfam": "ipv4", 00:24:41.777 "trsvcid": "4420", 00:24:41.777 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:41.777 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:41.777 "prchk_reftag": false, 00:24:41.777 "prchk_guard": false, 00:24:41.777 "hdgst": false, 00:24:41.777 "ddgst": false, 00:24:41.777 "allow_unrecognized_csi": false, 00:24:41.777 "method": "bdev_nvme_attach_controller", 00:24:41.777 "req_id": 1 00:24:41.777 } 00:24:41.777 Got JSON-RPC error response 00:24:41.777 response: 00:24:41.777 { 00:24:41.777 "code": -5, 00:24:41.777 "message": "Input/output 
error" 00:24:41.777 } 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.777 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.035 request: 00:24:42.035 { 00:24:42.035 "name": "nvme0", 00:24:42.035 "trtype": "tcp", 00:24:42.035 "traddr": "10.0.0.1", 
00:24:42.035 "adrfam": "ipv4", 00:24:42.035 "trsvcid": "4420", 00:24:42.035 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:42.035 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:42.035 "prchk_reftag": false, 00:24:42.035 "prchk_guard": false, 00:24:42.035 "hdgst": false, 00:24:42.035 "ddgst": false, 00:24:42.035 "dhchap_key": "key2", 00:24:42.035 "allow_unrecognized_csi": false, 00:24:42.035 "method": "bdev_nvme_attach_controller", 00:24:42.035 "req_id": 1 00:24:42.035 } 00:24:42.035 Got JSON-RPC error response 00:24:42.035 response: 00:24:42.035 { 00:24:42.035 "code": -5, 00:24:42.035 "message": "Input/output error" 00:24:42.035 } 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.036 18:14:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:42.036 18:14:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.036 request: 00:24:42.036 { 00:24:42.036 "name": "nvme0", 00:24:42.036 "trtype": "tcp", 00:24:42.036 "traddr": "10.0.0.1", 00:24:42.036 "adrfam": "ipv4", 00:24:42.036 "trsvcid": "4420", 00:24:42.036 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:42.036 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:42.036 "prchk_reftag": false, 00:24:42.036 "prchk_guard": false, 00:24:42.036 "hdgst": false, 00:24:42.036 "ddgst": false, 00:24:42.036 "dhchap_key": "key1", 00:24:42.036 "dhchap_ctrlr_key": "ckey2", 00:24:42.036 "allow_unrecognized_csi": false, 00:24:42.036 "method": "bdev_nvme_attach_controller", 00:24:42.036 "req_id": 1 00:24:42.036 } 00:24:42.036 Got JSON-RPC error response 00:24:42.036 response: 00:24:42.036 { 00:24:42.036 "code": -5, 00:24:42.036 "message": "Input/output error" 00:24:42.036 } 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.036 18:14:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.294 nvme0n1 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.294 18:14:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.294 18:14:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.294 request: 00:24:42.294 { 00:24:42.294 "name": "nvme0", 00:24:42.294 "dhchap_key": "key1", 00:24:42.294 "dhchap_ctrlr_key": "ckey2", 00:24:42.294 "method": "bdev_nvme_set_keys", 00:24:42.294 "req_id": 1 00:24:42.294 } 00:24:42.294 Got JSON-RPC error response 00:24:42.294 response: 00:24:42.294 { 00:24:42.294 "code": -13, 00:24:42.294 "message": "Permission denied" 00:24:42.294 } 00:24:42.294 
18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:42.294 18:14:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:43.667 18:14:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYxYzA3OTU4OWQ4Y2YwNDJmNWU4NGJmY2E1ZWZlMGUzZmY0OTQzYWFhYzhlNjc34iYuBg==: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: ]] 00:24:44.600 18:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjZmNWI1MGM0ZWFkYTliYTRhYjcyYjMxMjcwNTZmNmQ2NzNkOWM5YTE5ODc5OWVm2552vw==: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.600 nvme0n1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.600 18:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWFkNTFkYTJhZGNkY2JhMzFkMmNiYjFhMmRmMDgwYTXEvqL6: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxMDY2ZjJlYzM5ZmRhNGU5YjhkNDJlMjkxMmY5YTMZboNG: 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:44.600 
18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.600 request: 00:24:44.600 { 00:24:44.600 "name": "nvme0", 00:24:44.600 "dhchap_key": "key2", 00:24:44.600 "dhchap_ctrlr_key": "ckey1", 00:24:44.600 "method": "bdev_nvme_set_keys", 00:24:44.600 "req_id": 1 00:24:44.600 } 00:24:44.600 Got JSON-RPC error response 00:24:44.600 response: 00:24:44.600 { 00:24:44.600 "code": -13, 00:24:44.600 "message": "Permission denied" 00:24:44.600 } 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.600 18:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.600 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.858 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:44.858 18:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.790 rmmod nvme_tcp 00:24:45.790 rmmod nvme_fabrics 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1562644 ']' 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1562644 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1562644 ']' 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1562644 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1562644 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1562644' 00:24:45.790 killing process with pid 1562644 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1562644 00:24:45.790 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1562644 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.048 18:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:48.579 18:14:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:49.515 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:50.450 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:50.450 18:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CTY /tmp/spdk.key-null.kf1 /tmp/spdk.key-sha256.itJ /tmp/spdk.key-sha384.tDC /tmp/spdk.key-sha512.Y5y 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:50.450 18:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.827 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:51.827 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:51.827 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:51.827 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:51.827 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:51.827 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:51.827 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:51.827 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:51.827 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:51.827 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:51.827 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:51.827 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:51.827 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:51.827 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:51.827 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:51.827 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:51.827 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:52.085 00:24:52.085 real 0m51.481s 00:24:52.085 user 0m49.211s 00:24:52.085 sys 0m6.193s 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.085 ************************************ 00:24:52.085 END TEST nvmf_auth_host 00:24:52.085 ************************************ 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:24:52.085 18:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.085 ************************************ 00:24:52.085 START TEST nvmf_digest 00:24:52.085 ************************************ 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:52.085 * Looking for test storage... 00:24:52.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:24:52.085 18:14:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.085 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:52.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.085 --rc genhtml_branch_coverage=1 00:24:52.085 --rc genhtml_function_coverage=1 00:24:52.085 --rc genhtml_legend=1 00:24:52.085 --rc geninfo_all_blocks=1 00:24:52.085 --rc geninfo_unexecuted_blocks=1 00:24:52.085 00:24:52.086 ' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:52.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.086 --rc genhtml_branch_coverage=1 00:24:52.086 --rc genhtml_function_coverage=1 00:24:52.086 --rc genhtml_legend=1 00:24:52.086 --rc geninfo_all_blocks=1 00:24:52.086 --rc geninfo_unexecuted_blocks=1 00:24:52.086 00:24:52.086 ' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:52.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.086 --rc genhtml_branch_coverage=1 00:24:52.086 --rc genhtml_function_coverage=1 00:24:52.086 --rc genhtml_legend=1 00:24:52.086 --rc geninfo_all_blocks=1 00:24:52.086 --rc geninfo_unexecuted_blocks=1 00:24:52.086 00:24:52.086 ' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:52.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.086 --rc genhtml_branch_coverage=1 00:24:52.086 --rc genhtml_function_coverage=1 00:24:52.086 --rc genhtml_legend=1 00:24:52.086 --rc geninfo_all_blocks=1 00:24:52.086 --rc geninfo_unexecuted_blocks=1 00:24:52.086 00:24:52.086 ' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.086 18:14:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.086 18:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.619 18:14:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.619 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:24:54.620 00:24:54.620 --- 10.0.0.2 ping statistics --- 00:24:54.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.620 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:24:54.620 00:24:54.620 --- 10.0.0.1 ping statistics --- 00:24:54.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.620 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:54.620 ************************************ 00:24:54.620 START TEST nvmf_digest_clean 00:24:54.620 ************************************ 00:24:54.620 
18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1572277 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1572277 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1572277 ']' 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.620 18:14:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.620 [2024-12-09 18:14:17.349683] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:24:54.620 [2024-12-09 18:14:17.349759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.620 [2024-12-09 18:14:17.421664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.620 [2024-12-09 18:14:17.477764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.620 [2024-12-09 18:14:17.477816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.620 [2024-12-09 18:14:17.477829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.620 [2024-12-09 18:14:17.477840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.620 [2024-12-09 18:14:17.477850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.620 [2024-12-09 18:14:17.478411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.620 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.878 null0 00:24:54.878 [2024-12-09 18:14:17.722786] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.878 [2024-12-09 18:14:17.747013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1572305 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1572305 /var/tmp/bperf.sock 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1572305 ']' 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
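The digest helpers in this run read CRC32C accounting by piping `bperf_rpc accel_get_stats` through a jq filter. Against a hand-written payload of the same shape (the shape is inferred from the filter itself and the sample values are made up, not taken from this run), the filter reduces the stats to the single `module executed` pair that the subsequent `read -r acc_module acc_executed` consumes:

```shell
# Sample accel_get_stats-style payload (shape assumed from the jq filter in
# this log; values are illustrative). The filter keeps only the crc32c row
# and prints "<module_name> <executed>" on one raw line.
cat <<'EOF' | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
{
  "operations": [
    { "opcode": "copy",   "module_name": "software", "executed": 0 },
    { "opcode": "crc32c", "module_name": "software", "executed": 2 }
  ]
}
EOF
```

With this payload the pipeline prints `software 2`, which matches the `exp_module=software` comparison the test performs later.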
00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.878 18:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.879 [2024-12-09 18:14:17.793251] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:24:54.879 [2024-12-09 18:14:17.793324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572305 ] 00:24:54.879 [2024-12-09 18:14:17.860782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.136 [2024-12-09 18:14:17.919134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.136 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.136 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:55.136 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:55.136 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:55.136 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:55.394 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.394 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.958 nvme0n1 00:24:55.958 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:55.958 18:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.216 Running I/O for 2 seconds... 00:24:58.082 18779.00 IOPS, 73.36 MiB/s [2024-12-09T17:14:21.123Z] 18750.50 IOPS, 73.24 MiB/s 00:24:58.082 Latency(us) 00:24:58.082 [2024-12-09T17:14:21.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.082 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:58.082 nvme0n1 : 2.01 18765.52 73.30 0.00 0.00 6813.67 3616.62 15049.01 00:24:58.082 [2024-12-09T17:14:21.123Z] =================================================================================================================== 00:24:58.082 [2024-12-09T17:14:21.123Z] Total : 18765.52 73.30 0.00 0.00 6813.67 3616.62 15049.01 00:24:58.082 { 00:24:58.082 "results": [ 00:24:58.082 { 00:24:58.082 "job": "nvme0n1", 00:24:58.082 "core_mask": "0x2", 00:24:58.082 "workload": "randread", 00:24:58.082 "status": "finished", 00:24:58.082 "queue_depth": 128, 00:24:58.082 "io_size": 4096, 00:24:58.082 "runtime": 2.00522, 00:24:58.082 "iops": 18765.52198761233, 00:24:58.082 "mibps": 73.30282026411066, 00:24:58.082 "io_failed": 0, 00:24:58.082 "io_timeout": 0, 00:24:58.082 "avg_latency_us": 6813.6680995646575, 00:24:58.082 "min_latency_us": 3616.6162962962962, 00:24:58.082 "max_latency_us": 15049.007407407407 00:24:58.082 } 00:24:58.082 ], 00:24:58.082 "core_count": 1 00:24:58.082 } 00:24:58.082 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:58.082 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:24:58.082 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:58.082 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:58.082 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:58.082 | select(.opcode=="crc32c") 00:24:58.082 | "\(.module_name) \(.executed)"' 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1572305 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1572305 ']' 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1572305 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.339 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572305 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572305' 00:24:58.596 killing process with pid 1572305 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1572305 00:24:58.596 Received shutdown signal, test time was about 2.000000 seconds 00:24:58.596 00:24:58.596 Latency(us) 00:24:58.596 [2024-12-09T17:14:21.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.596 [2024-12-09T17:14:21.637Z] =================================================================================================================== 00:24:58.596 [2024-12-09T17:14:21.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1572305 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:58.596 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1572827 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1572827 /var/tmp/bperf.sock 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1572827 ']' 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.597 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.854 [2024-12-09 18:14:21.653247] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:24:58.854 [2024-12-09 18:14:21.653324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572827 ] 00:24:58.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.854 Zero copy mechanism will not be used. 
00:24:58.854 [2024-12-09 18:14:21.720002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.854 [2024-12-09 18:14:21.775230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.854 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.854 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:58.854 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:58.854 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.854 18:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.420 18:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.420 18:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.678 nvme0n1 00:24:59.678 18:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:59.678 18:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.678 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:59.678 Zero copy mechanism will not be used. 00:24:59.678 Running I/O for 2 seconds... 
00:25:01.984 5747.00 IOPS, 718.38 MiB/s [2024-12-09T17:14:25.025Z] 5747.00 IOPS, 718.38 MiB/s 00:25:01.984 Latency(us) 00:25:01.984 [2024-12-09T17:14:25.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.984 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:01.984 nvme0n1 : 2.00 5745.29 718.16 0.00 0.00 2780.49 688.73 11359.57 00:25:01.984 [2024-12-09T17:14:25.025Z] =================================================================================================================== 00:25:01.984 [2024-12-09T17:14:25.025Z] Total : 5745.29 718.16 0.00 0.00 2780.49 688.73 11359.57 00:25:01.984 { 00:25:01.984 "results": [ 00:25:01.984 { 00:25:01.984 "job": "nvme0n1", 00:25:01.984 "core_mask": "0x2", 00:25:01.984 "workload": "randread", 00:25:01.984 "status": "finished", 00:25:01.984 "queue_depth": 16, 00:25:01.984 "io_size": 131072, 00:25:01.984 "runtime": 2.00338, 00:25:01.984 "iops": 5745.29045912408, 00:25:01.984 "mibps": 718.16130739051, 00:25:01.984 "io_failed": 0, 00:25:01.984 "io_timeout": 0, 00:25:01.984 "avg_latency_us": 2780.492963928307, 00:25:01.984 "min_latency_us": 688.7348148148149, 00:25:01.984 "max_latency_us": 11359.573333333334 00:25:01.984 } 00:25:01.984 ], 00:25:01.984 "core_count": 1 00:25:01.984 } 00:25:01.984 18:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:01.984 18:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:01.984 18:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:01.984 18:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:01.984 | select(.opcode=="crc32c") 00:25:01.984 | "\(.module_name) \(.executed)"' 00:25:01.984 18:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1572827 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1572827 ']' 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1572827 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.984 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572827 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572827' 00:25:02.246 killing process with pid 1572827 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1572827 00:25:02.246 Received shutdown signal, test time was about 2.000000 seconds 
00:25:02.246 00:25:02.246 Latency(us) 00:25:02.246 [2024-12-09T17:14:25.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.246 [2024-12-09T17:14:25.287Z] =================================================================================================================== 00:25:02.246 [2024-12-09T17:14:25.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1572827 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1573237 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1573237 /var/tmp/bperf.sock 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1573237 ']' 00:25:02.246 18:14:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.246 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.510 [2024-12-09 18:14:25.323762] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:02.510 [2024-12-09 18:14:25.323853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573237 ] 00:25:02.510 [2024-12-09 18:14:25.390937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.510 [2024-12-09 18:14:25.444974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.767 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.767 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:02.767 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:02.767 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:02.767 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:03.024 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.024 18:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.589 nvme0n1 00:25:03.589 18:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:03.589 18:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.589 Running I/O for 2 seconds... 
00:25:05.895 19446.00 IOPS, 75.96 MiB/s [2024-12-09T17:14:28.936Z] 19091.00 IOPS, 74.57 MiB/s 00:25:05.895 Latency(us) 00:25:05.895 [2024-12-09T17:14:28.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.895 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:05.895 nvme0n1 : 2.01 19091.25 74.58 0.00 0.00 6689.93 2742.80 9709.04 00:25:05.895 [2024-12-09T17:14:28.936Z] =================================================================================================================== 00:25:05.895 [2024-12-09T17:14:28.936Z] Total : 19091.25 74.58 0.00 0.00 6689.93 2742.80 9709.04 00:25:05.895 { 00:25:05.895 "results": [ 00:25:05.895 { 00:25:05.895 "job": "nvme0n1", 00:25:05.895 "core_mask": "0x2", 00:25:05.895 "workload": "randwrite", 00:25:05.895 "status": "finished", 00:25:05.895 "queue_depth": 128, 00:25:05.895 "io_size": 4096, 00:25:05.895 "runtime": 2.006678, 00:25:05.895 "iops": 19091.254301886, 00:25:05.895 "mibps": 74.5752121167422, 00:25:05.895 "io_failed": 0, 00:25:05.895 "io_timeout": 0, 00:25:05.895 "avg_latency_us": 6689.929002581282, 00:25:05.895 "min_latency_us": 2742.8029629629627, 00:25:05.895 "max_latency_us": 9709.037037037036 00:25:05.895 } 00:25:05.895 ], 00:25:05.895 "core_count": 1 00:25:05.895 } 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:05.895 | select(.opcode=="crc32c") 00:25:05.895 | "\(.module_name) \(.executed)"' 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1573237 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1573237 ']' 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1573237 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573237 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573237' 00:25:05.895 killing process with pid 1573237 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1573237 00:25:05.895 Received shutdown signal, test time was about 2.000000 seconds 
00:25:05.895 00:25:05.895 Latency(us) 00:25:05.895 [2024-12-09T17:14:28.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.895 [2024-12-09T17:14:28.936Z] =================================================================================================================== 00:25:05.895 [2024-12-09T17:14:28.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.895 18:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1573237 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:06.154 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1573647 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1573647 /var/tmp/bperf.sock 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1573647 ']' 00:25:06.155 18:14:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.155 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.155 [2024-12-09 18:14:29.122555] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:06.155 [2024-12-09 18:14:29.122636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573647 ] 00:25:06.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:06.155 Zero copy mechanism will not be used. 
00:25:06.155 [2024-12-09 18:14:29.191039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.413 [2024-12-09 18:14:29.247947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.413 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.413 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:06.413 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:06.413 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.413 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.981 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.981 18:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.238 nvme0n1 00:25:07.238 18:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:07.239 18:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.239 Zero copy mechanism will not be used. 00:25:07.239 Running I/O for 2 seconds... 
00:25:09.609 5738.00 IOPS, 717.25 MiB/s [2024-12-09T17:14:32.650Z] 5969.00 IOPS, 746.12 MiB/s 00:25:09.609 Latency(us) 00:25:09.609 [2024-12-09T17:14:32.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.609 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:09.609 nvme0n1 : 2.00 5966.91 745.86 0.00 0.00 2674.42 1747.63 4781.70 00:25:09.609 [2024-12-09T17:14:32.650Z] =================================================================================================================== 00:25:09.609 [2024-12-09T17:14:32.650Z] Total : 5966.91 745.86 0.00 0.00 2674.42 1747.63 4781.70 00:25:09.609 { 00:25:09.609 "results": [ 00:25:09.609 { 00:25:09.609 "job": "nvme0n1", 00:25:09.609 "core_mask": "0x2", 00:25:09.609 "workload": "randwrite", 00:25:09.609 "status": "finished", 00:25:09.609 "queue_depth": 16, 00:25:09.609 "io_size": 131072, 00:25:09.609 "runtime": 2.003885, 00:25:09.609 "iops": 5966.909278726074, 00:25:09.609 "mibps": 745.8636598407593, 00:25:09.609 "io_failed": 0, 00:25:09.609 "io_timeout": 0, 00:25:09.609 "avg_latency_us": 2674.4237587156445, 00:25:09.609 "min_latency_us": 1747.6266666666668, 00:25:09.609 "max_latency_us": 4781.700740740741 00:25:09.609 } 00:25:09.609 ], 00:25:09.609 "core_count": 1 00:25:09.609 } 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:09.609 | select(.opcode=="crc32c") 00:25:09.609 | "\(.module_name) \(.executed)"' 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1573647 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1573647 ']' 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1573647 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.609 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573647 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573647' 00:25:09.891 killing process with pid 1573647 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1573647 00:25:09.891 Received shutdown signal, test time was about 2.000000 seconds 
00:25:09.891 00:25:09.891 Latency(us) 00:25:09.891 [2024-12-09T17:14:32.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.891 [2024-12-09T17:14:32.932Z] =================================================================================================================== 00:25:09.891 [2024-12-09T17:14:32.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1573647 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1572277 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1572277 ']' 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1572277 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572277 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572277' 00:25:09.891 killing process with pid 1572277 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1572277 00:25:09.891 18:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1572277 00:25:10.151 00:25:10.151 
real 0m15.836s 00:25:10.151 user 0m31.807s 00:25:10.151 sys 0m4.243s 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:10.151 ************************************ 00:25:10.151 END TEST nvmf_digest_clean 00:25:10.151 ************************************ 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:10.151 ************************************ 00:25:10.151 START TEST nvmf_digest_error 00:25:10.151 ************************************ 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.151 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1574207 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:10.411 
18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1574207 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1574207 ']' 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.411 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.411 [2024-12-09 18:14:33.241570] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:10.411 [2024-12-09 18:14:33.241673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.411 [2024-12-09 18:14:33.315301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.411 [2024-12-09 18:14:33.373577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.411 [2024-12-09 18:14:33.373640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:10.411 [2024-12-09 18:14:33.373670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.411 [2024-12-09 18:14:33.373682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.411 [2024-12-09 18:14:33.373692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.411 [2024-12-09 18:14:33.374326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.670 [2024-12-09 18:14:33.503112] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.670 18:14:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.670 null0 00:25:10.670 [2024-12-09 18:14:33.622952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.670 [2024-12-09 18:14:33.647195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1574228 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1574228 /var/tmp/bperf.sock 00:25:10.670 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1574228 ']' 
00:25:10.671 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.671 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.671 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.671 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.671 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.671 [2024-12-09 18:14:33.694685] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:10.671 [2024-12-09 18:14:33.694763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574228 ] 00:25:10.930 [2024-12-09 18:14:33.760726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.930 [2024-12-09 18:14:33.822078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.189 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.189 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:11.189 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.189 18:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.446 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:11.446 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.447 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.447 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.447 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.447 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.704 nvme0n1 00:25:11.704 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:11.704 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.704 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.704 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.704 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:11.704 18:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.963 Running I/O for 2 seconds... 00:25:11.963 [2024-12-09 18:14:34.844169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.844234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.844256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.860649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.860699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.860716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.876681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.876715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.876733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.889391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.889425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9412 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.889443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.901439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.901468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.901499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.916482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.916514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.916531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.932080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.932109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.932140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.947232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.947263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.947296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.958917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.958948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.958981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.973814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.973845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.973862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.963 [2024-12-09 18:14:34.989056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:11.963 [2024-12-09 18:14:34.989089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.963 [2024-12-09 18:14:34.989106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.004198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 
18:14:35.004228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.004260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.016801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.016832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.016849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.029482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.029527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.029554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.040803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.040846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.040862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.055668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.055704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.055720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.068428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.068475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.068491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.084835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.084866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.084884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.098114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.098144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.098182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.111022] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.111051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.111082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.126505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.126533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.126554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.141027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.141059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.141075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.154219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.154249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.154266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.165357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.165396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.165426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.179175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.179202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.179232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.194993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.195021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.195052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.208486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.208530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.208570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.221899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.221935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.221969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.234355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.234383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.234398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.222 [2024-12-09 18:14:35.246943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.222 [2024-12-09 18:14:35.246971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.222 [2024-12-09 18:14:35.247002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.261715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.261744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.261761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.276110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.276141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.276159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.288072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.288099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.288129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.302266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.302293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.302323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.315996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.316040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6478 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.316056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.332492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.332522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.332565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.346677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.346708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.346725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.358006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.358033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.358064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.372171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.372203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:18064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.372221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.387575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.387606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.387623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.404262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.404290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.404322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.415121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.415149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.415179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.430012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.430039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.430070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.445750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.445779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.445794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.462490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.462518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.462561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.475701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.475732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.475749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.487224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.487252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.487268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.500518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.500573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.500605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.482 [2024-12-09 18:14:35.512804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.482 [2024-12-09 18:14:35.512831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.482 [2024-12-09 18:14:35.512860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.528286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.528316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.528349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.543081] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.543110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.543141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.555233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.555260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.555290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.568913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.568940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.568971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.584211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.584239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.584269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.597397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.597427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.597459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.608942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.608972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.609004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.621786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.621831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.621847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.634892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.634938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.634954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.648993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.649022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.649054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.663783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.663813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.663830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.675064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.675091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.675123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.688682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.688725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.688748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.704703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.704747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.704763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.716351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.716379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.716409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.732117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.732148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.732165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.747161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.747189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.747220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.762232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.762261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.762293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.742 [2024-12-09 18:14:35.775641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:12.742 [2024-12-09 18:14:35.775675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.742 [2024-12-09 18:14:35.775693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.788429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.788459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.788491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.805156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.805185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:73 nsid:1 lba:16471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.805216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.820620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.820655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.820672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 18234.00 IOPS, 71.23 MiB/s [2024-12-09T17:14:36.042Z] [2024-12-09 18:14:35.836360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.836407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.836425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.852279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.852308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.852339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.864887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.864917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.864933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.876143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.876173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.876191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.889468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.889513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.889530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.905768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.905798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.905814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.922114] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.922143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.922175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.936109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.936139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.936156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.950692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.950723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.950740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.961673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.961702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.961717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.978232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.978262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.978293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:35.992563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:35.992593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:35.992609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:36.008991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:36.009019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:36.009051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:36.023585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:36.023616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:36.023633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.001 [2024-12-09 18:14:36.036198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.001 [2024-12-09 18:14:36.036228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.001 [2024-12-09 18:14:36.036260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.052881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.052913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.260 [2024-12-09 18:14:36.052931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.067374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.067406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.260 [2024-12-09 18:14:36.067432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.078296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.078326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.260 [2024-12-09 18:14:36.078357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.094520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.094573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.260 [2024-12-09 18:14:36.094591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.110398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.110427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.260 [2024-12-09 18:14:36.110458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.126982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.127012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.260 [2024-12-09 18:14:36.127043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.141550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.260 [2024-12-09 18:14:36.141582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:13.260 [2024-12-09 18:14:36.141610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.260 [2024-12-09 18:14:36.153304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.153336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.153353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.168221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.168252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.168286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.180140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.180183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.180201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.193676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.193712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:7676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.193730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.209035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.209065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.209082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.220489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.220517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.220555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.235997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.236025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.236057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.252489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.252517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.252553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.267515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.267567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.267584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.278856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.278887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.278904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.261 [2024-12-09 18:14:36.292961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.261 [2024-12-09 18:14:36.292992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.261 [2024-12-09 18:14:36.293010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.308368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 
00:25:13.520 [2024-12-09 18:14:36.308396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.308434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.322574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.322605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.322622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.334114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.334142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.334174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.347726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.347755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.347771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.361980] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.362026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.362043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.377135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.377180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.377198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.390949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.390981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.390998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.404912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.404942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.404959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.416004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.416032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.416064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.431624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.431663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.431681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.447460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.447488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.447519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.463796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.463826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.463842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.479860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.479890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.479906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.496082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.496114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.496131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.509887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.509918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.509934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.521194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.521221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.521252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.537700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.537731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.537748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.520 [2024-12-09 18:14:36.551269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.520 [2024-12-09 18:14:36.551299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.520 [2024-12-09 18:14:36.551330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.568387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.568415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.568447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.579673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.579702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21020 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.579717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.594820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.594866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.594882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.607251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.607279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.607311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.619338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.619366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.619396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.632725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.632753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:3357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.632769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.647512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.647541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.647582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.663227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.663255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.663287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.676374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.676402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.676445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.689418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.689447] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.689478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.703958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.703986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.704019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.719215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.719259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.719275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.733738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.733768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.733786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.745329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.745361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.745377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.760462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.760491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.760522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.776353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.776397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.776412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.792720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420) 00:25:13.779 [2024-12-09 18:14:36.792749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.779 [2024-12-09 18:14:36.792765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.779 [2024-12-09 18:14:36.808056] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420)
00:25:13.779 [2024-12-09 18:14:36.808090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.779 [2024-12-09 18:14:36.808120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:14.037 [2024-12-09 18:14:36.824110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420)
00:25:14.037 [2024-12-09 18:14:36.824139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.037 [2024-12-09 18:14:36.824171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:14.037 18082.50 IOPS, 70.63 MiB/s [2024-12-09T17:14:37.078Z] [2024-12-09 18:14:36.838220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe06420)
00:25:14.037 [2024-12-09 18:14:36.838250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.037 [2024-12-09 18:14:36.838283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:14.037
00:25:14.037 Latency(us)
00:25:14.037 [2024-12-09T17:14:37.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:14.037 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:14.037 nvme0n1 : 2.05 17712.60 69.19 0.00 0.00 7078.46 3495.25 47574.28
00:25:14.037 [2024-12-09T17:14:37.078Z]
===================================================================================================================
00:25:14.037 [2024-12-09T17:14:37.078Z] Total : 17712.60 69.19 0.00 0.00 7078.46 3495.25 47574.28
00:25:14.037 {
00:25:14.037 "results": [
00:25:14.037 {
00:25:14.037 "job": "nvme0n1",
00:25:14.037 "core_mask": "0x2",
00:25:14.037 "workload": "randread",
00:25:14.037 "status": "finished",
00:25:14.037 "queue_depth": 128,
00:25:14.037 "io_size": 4096,
00:25:14.037 "runtime": 2.048993,
00:25:14.037 "iops": 17712.603215335534,
00:25:14.037 "mibps": 69.18985630990443,
00:25:14.037 "io_failed": 0,
00:25:14.037 "io_timeout": 0,
00:25:14.037 "avg_latency_us": 7078.464635420971,
00:25:14.037 "min_latency_us": 3495.2533333333336,
00:25:14.037 "max_latency_us": 47574.281481481485
00:25:14.037 }
00:25:14.037 ],
00:25:14.037 "core_count": 1
00:25:14.037 }
00:25:14.037 18:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:14.037 18:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:14.037 18:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:14.037 18:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:14.037 | .driver_specific
00:25:14.037 | .nvme_error
00:25:14.037 | .status_code
00:25:14.037 | .command_transient_transport_error'
00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 ))
00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1574228
00:25:14.295 18:14:37
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1574228 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574228 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574228' 00:25:14.295 killing process with pid 1574228 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1574228 00:25:14.295 Received shutdown signal, test time was about 2.000000 seconds 00:25:14.295 00:25:14.295 Latency(us) 00:25:14.295 [2024-12-09T17:14:37.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.295 [2024-12-09T17:14:37.336Z] =================================================================================================================== 00:25:14.295 [2024-12-09T17:14:37.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.295 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1574228 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1574758 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1574758 /var/tmp/bperf.sock 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1574758 ']' 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.553 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.553 [2024-12-09 18:14:37.469376] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:25:14.553 [2024-12-09 18:14:37.469453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574758 ] 00:25:14.553 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:14.553 Zero copy mechanism will not be used. 00:25:14.553 [2024-12-09 18:14:37.534721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.553 [2024-12-09 18:14:37.589748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.811 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.811 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:14.811 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.811 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.068 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:15.068 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.068 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.068 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.068 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.068 18:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.326 nvme0n1 00:25:15.326 18:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:15.326 18:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.326 18:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.326 18:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.326 18:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.326 18:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:15.584 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.584 Zero copy mechanism will not be used. 00:25:15.584 Running I/O for 2 seconds... 
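The pass/fail check in this trace is the `get_transient_errcount` step from host/digest.sh: it fetches `bdev_get_iostat` JSON over the bperf RPC socket and reduces it with the jq filter seen in the xtrace. A minimal offline sketch of that extraction follows; the sample JSON is hand-made to mirror only the fields the filter touches, not captured RPC output.

```shell
# Hand-made stand-in for the bdev_get_iostat RPC response (assumption: only the
# fields that host/digest.sh's jq filter reads are reproduced here).
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":142}}}}]}'

# Same filter shape as digest.sh@28: walk down to the transient transport
# error counter accumulated by --nvme-error-stat.
errcount=$(echo "$sample" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

echo "$errcount"
# digest.sh@71 then asserts the counter is non-zero: (( errcount > 0 ))
```

With `--ddgst` enabled on the controller and crc32c corruption injected via `accel_error_inject_error`, each read completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), so a positive counter here is the expected outcome, not a failure.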
00:25:15.584 [2024-12-09 18:14:38.444578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.584 [2024-12-09 18:14:38.444631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.584 [2024-12-09 18:14:38.444653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.584 [2024-12-09 18:14:38.449653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.584 [2024-12-09 18:14:38.449688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.584 [2024-12-09 18:14:38.449706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.584 [2024-12-09 18:14:38.455030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.584 [2024-12-09 18:14:38.455078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.584 [2024-12-09 18:14:38.455095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.584 [2024-12-09 18:14:38.461808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.584 [2024-12-09 18:14:38.461841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.584 [2024-12-09 18:14:38.461859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.584 [2024-12-09 18:14:38.467209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.584 [2024-12-09 18:14:38.467254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.467299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.472553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.472584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.472601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.477602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.477634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.477652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.482255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.482286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.482302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.486828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.486859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.486876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.491446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.491476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.491493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.497207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.497237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.497255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.504487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.504519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.585 [2024-12-09 18:14:38.504541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.509994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.510026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.510043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.515202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.515239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.515257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.520998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.521030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.585 [2024-12-09 18:14:38.521047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.585 [2024-12-09 18:14:38.524115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:15.585 [2024-12-09 18:14:38.524145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.585 [2024-12-09 18:14:38.524163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:15.585 [2024-12-09 18:14:38.529320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:15.585 [2024-12-09 18:14:38.529351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.585 [2024-12-09 18:14:38.529384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... same three-line sequence (nvme_tcp.c:1365 *ERROR*: data digest error on tqpair=(0x1c029d0); nvme_qpair.c:243 *NOTICE*: READ command print; nvme_qpair.c:474 *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats with varying cid/lba/sqhd values from [2024-12-09 18:14:38.535250] through [2024-12-09 18:14:38.997422] ...]
00:25:16.102 [2024-12-09 18:14:39.003178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.102 [2024-12-09 18:14:39.003223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.102 [2024-12-09 18:14:39.003240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.009222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.009269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.009286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.015347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.015392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.015409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.020908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.020941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.020964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.026693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.026725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 
18:14:39.026743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.033238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.033270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.033287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.039512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.039551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.039570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.045177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.045209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.045227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.049054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.049084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.049116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.054774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.054806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.054823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.060845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.102 [2024-12-09 18:14:39.060875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.102 [2024-12-09 18:14:39.060892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.102 [2024-12-09 18:14:39.068412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.068456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.068474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.074016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.074051] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.074083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.079493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.079524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.079540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.085085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.085131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.085148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.089927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.089973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.089990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.094499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 
18:14:39.094550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.094568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.098975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.099004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.099021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.103479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.103524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.103541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.107950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.107980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.107996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.112518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.112573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.112592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.117047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.117076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.117092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.121644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.121673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.121689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.126469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.126511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.126529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.131641] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.131687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.131704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.103 [2024-12-09 18:14:39.136736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.103 [2024-12-09 18:14:39.136781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.103 [2024-12-09 18:14:39.136798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.141575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.141606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.141623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.146245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.146274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.146290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.150817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.150846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.150863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.156143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.156174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.156197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.163206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.163237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.163273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.169522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.169575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.169592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.175143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.175173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.175205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.180524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.180579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.180596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.185008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.185054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.185072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.190769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.190800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.190818] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.195348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.195392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.195409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.199910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.199940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.199956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.204459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.204489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.204505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.209225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.209272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.209289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.213748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.213779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.213796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.218422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.218453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.218470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.223972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.224002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.224020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.231729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.231760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.231778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.238006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.238038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.238054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.244533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.361 [2024-12-09 18:14:39.244586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.361 [2024-12-09 18:14:39.244605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.361 [2024-12-09 18:14:39.249417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.249450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.249473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.253179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.253208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.253241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.259166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.259212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.259230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.264496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.264526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.264568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.269640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.269671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.269688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.275100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.275130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.275162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.280971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.281001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.281032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.287058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.287090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.287107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.292213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.292257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.292275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.296951] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.296986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.297018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.302006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.302036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.302069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.306643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.306675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.306692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.311277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.311308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.311324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.316023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.316053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.316085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.321219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.321264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.321281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.327197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.327228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.327261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.332254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.332284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.332318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.337947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.337976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.338007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.343580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.343610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.343626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.348661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.348692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.348709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.352631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.352662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 
18:14:39.352679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.356919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.356950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.356966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.361416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.361445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.361463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.365913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.362 [2024-12-09 18:14:39.365944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.362 [2024-12-09 18:14:39.365961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.362 [2024-12-09 18:14:39.370406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.370449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.370466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.363 [2024-12-09 18:14:39.374903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.374933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.374949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.363 [2024-12-09 18:14:39.379318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.379347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.379370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.363 [2024-12-09 18:14:39.383759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.383790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.383806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.363 [2024-12-09 18:14:39.388266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.388328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.363 [2024-12-09 18:14:39.392856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.392885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.392902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.363 [2024-12-09 18:14:39.397507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.363 [2024-12-09 18:14:39.397538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.363 [2024-12-09 18:14:39.397566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.402072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.402103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.402119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.406601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 
18:14:39.406630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.406647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.412183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.412214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.412231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.419615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.419645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.419662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.426084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.426122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.426141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.433208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.433240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.433258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.623 5530.00 IOPS, 691.25 MiB/s [2024-12-09T17:14:39.664Z] [2024-12-09 18:14:39.441533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.441573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.441591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.447212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.447244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.447262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.453445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.453477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.453494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:25:16.623 [2024-12-09 18:14:39.458922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.458955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.458973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.464437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.464470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.464487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.469006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.469038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.469056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.623 [2024-12-09 18:14:39.472482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.623 [2024-12-09 18:14:39.472526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-12-09 18:14:39.472559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.476340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.476369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.476401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.480718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.480748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.480766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.485006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.485038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.485070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.489423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.489452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.489483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.493801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.493831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.493862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.498149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.498177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.498208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.503356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.503385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.503417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.507144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.507173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:16.624 [2024-12-09 18:14:39.507206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.512229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.512281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.512299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.518880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.518911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.518942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.526567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.526598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.526615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.533966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.533996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.534029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.541252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.541298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.541314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.549058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.549087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.549119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.556680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.556727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.556744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.564196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.564227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.564258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.571636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.571669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.571686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.579284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.579332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.579350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.586694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.586728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.586746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.594899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 
00:25:16.624 [2024-12-09 18:14:39.594931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.594964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.601632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.601665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.601683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.609472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.609518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.609536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.617176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.617207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.617241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.625039] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.625070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.625104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.633342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.633389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.633406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.640699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.640731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.640757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.624 [2024-12-09 18:14:39.646225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:16.624 [2024-12-09 18:14:39.646257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-12-09 18:14:39.646274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0
00:25:16.624 [2024-12-09 18:14:39.650599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.624 [2024-12-09 18:14:39.650630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.624 [2024-12-09 18:14:39.650647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.624 [2024-12-09 18:14:39.655717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.624 [2024-12-09 18:14:39.655747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.624 [2024-12-09 18:14:39.655764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.624 [2024-12-09 18:14:39.659537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.624 [2024-12-09 18:14:39.659576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.624 [2024-12-09 18:14:39.659593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.663970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.664000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.664033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.668540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.668578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.668595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.674572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.674612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.674631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.681581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.681621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.681638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.687400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.687440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.687458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.694254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.694300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.694319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.698493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.698525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.698541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.702951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.702982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.703014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.707525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.885 [2024-12-09 18:14:39.707567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.885 [2024-12-09 18:14:39.707585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.885 [2024-12-09 18:14:39.711894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.711928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.711948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.716269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.716302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.716319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.720523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.720563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.720582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.724871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.724916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.724931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.729284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.729314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.729331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.733735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.733766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.733783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.739087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.739120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.739137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.744525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.744581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.744600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.748127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.748157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.748174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.753376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.753406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.753424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.760252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.760284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.760301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.766543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.766584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.766602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.772924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.772957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.772983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.778968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.778999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.779017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.784910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.784942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.784960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.791112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.791143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.791175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.795631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.795663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.795680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.800835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.800867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.800884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.805866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.805897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.805914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.811249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.811280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.811297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.815750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.815780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.815797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.819507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.819538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.819563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.824567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.824598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.824615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.828955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.828985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.829003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.836054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.886 [2024-12-09 18:14:39.836100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.886 [2024-12-09 18:14:39.836118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.886 [2024-12-09 18:14:39.842216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.842248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.842266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.848477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.848509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.848527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.854686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.854718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.854736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.860934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.860980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.860997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.866154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.866185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.866209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.871079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.871110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.871127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.876069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.876115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.876132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.880839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.880871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.880889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.885756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.885787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.885804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.890204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.890235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.890252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.895749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.895781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.895798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.900269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.900301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.900318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.905593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.905627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.905644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.911535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.911593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.911612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.887 [2024-12-09 18:14:39.919300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:16.887 [2024-12-09 18:14:39.919333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.887 [2024-12-09 18:14:39.919350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.925607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.925639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.925656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.932506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.932559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.932603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.937791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.937822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.937848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.943286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.943318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.943335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.949057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.949102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.949118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.955161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.955193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.955210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.960737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.960768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.960785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.966755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.966786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.966805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.971801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.971835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.971853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.977250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.977282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.977300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.983202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.983234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.148 [2024-12-09 18:14:39.983251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.148 [2024-12-09 18:14:39.989165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.148 [2024-12-09 18:14:39.989213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:39.989230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:39.995324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:39.995356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:39.995374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.001410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.001445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.001464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.006596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.006635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.006653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.013078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.013126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.013159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.018958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.018995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.019014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.025027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.025066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.025085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.030686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.030723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.030742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.036078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.036115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.036134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.039511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.039555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.039577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.044080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.044113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.044132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.048165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.048196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.048213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.050948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.050978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.050995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.055394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.055426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.055443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.060307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.060340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.060357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.066204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.066237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.066255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.071365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.071397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.071415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.075944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0)
00:25:17.149 [2024-12-09 18:14:40.075975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.149 [2024-12-09 18:14:40.075991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:17.149 [2024-12-09 18:14:40.080584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.080614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.080631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.085257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.085289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.085306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.090954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.090986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.091004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.098505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.098538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.098575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.104132] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.104164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.104182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.110001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.110032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.110050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.114852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.114883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.114901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.149 [2024-12-09 18:14:40.119428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.149 [2024-12-09 18:14:40.119458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.149 [2024-12-09 18:14:40.119474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.124821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.124852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.124870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.131655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.131687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.131704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.138180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.138212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.138229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.144251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.144283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.144300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.151308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.151349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.151367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.157919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.157951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.157970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.165045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.165077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.165093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.171251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.171284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 
18:14:40.171301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.177324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.177356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.177375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.150 [2024-12-09 18:14:40.181268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.150 [2024-12-09 18:14:40.181299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.150 [2024-12-09 18:14:40.181316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.186497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.186531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.186556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.192586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.192619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.192636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.199589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.199621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.199640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.206471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.206503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.206520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.212234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.212266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.212283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.217724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.217756] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.217774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.222578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.222611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.222628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.225811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.225844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.225861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.230514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.230554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.230573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.236129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 
18:14:40.236161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.236179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.241823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.241854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.241872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.247728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.247760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.247786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.254214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.254246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.254263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.259933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.259980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.259997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.265863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.265895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.265912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.271624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.271655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.271672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.277440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.277471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.277489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.282678] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.282708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.282726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.288249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.288281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.288299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.294651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.294682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.294700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.300271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.300310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.300328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.305624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.305656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.305674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.412 [2024-12-09 18:14:40.312961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.412 [2024-12-09 18:14:40.312993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.412 [2024-12-09 18:14:40.313011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.319164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.319195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.319213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.323127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.323158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.323190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.328905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.328937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.328954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.333892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.333922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.333939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.338960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.338992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.339010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.343427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.343457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.343473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.348937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.348969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.348986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.354655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.354686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.354703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.360532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.360572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.360591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.366307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.366337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.366370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.372810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.372856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.372873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.377911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.377942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.377959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.383412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.383461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.388945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.388977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.388994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.396219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.396250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.396275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.402530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.402569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.402588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.408655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.408687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.408704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.413050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.413081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.413098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.417576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.417607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.417623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.422282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.422313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.422330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.427930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.427962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.427979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.413 [2024-12-09 18:14:40.434243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 
00:25:17.413 [2024-12-09 18:14:40.434274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.434292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.413 5530.50 IOPS, 691.31 MiB/s [2024-12-09T17:14:40.454Z] [2024-12-09 18:14:40.442199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c029d0) 00:25:17.413 [2024-12-09 18:14:40.442230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.413 [2024-12-09 18:14:40.442247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.413 00:25:17.413 Latency(us) 00:25:17.413 [2024-12-09T17:14:40.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.413 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:17.413 nvme0n1 : 2.00 5526.19 690.77 0.00 0.00 2890.42 594.68 13786.83 00:25:17.413 [2024-12-09T17:14:40.454Z] =================================================================================================================== 00:25:17.413 [2024-12-09T17:14:40.454Z] Total : 5526.19 690.77 0.00 0.00 2890.42 594.68 13786.83 00:25:17.413 { 00:25:17.413 "results": [ 00:25:17.413 { 00:25:17.413 "job": "nvme0n1", 00:25:17.413 "core_mask": "0x2", 00:25:17.414 "workload": "randread", 00:25:17.414 "status": "finished", 00:25:17.414 "queue_depth": 16, 00:25:17.414 "io_size": 131072, 00:25:17.414 "runtime": 2.004454, 00:25:17.414 "iops": 5526.193167815275, 00:25:17.414 "mibps": 690.7741459769094, 00:25:17.414 "io_failed": 0, 00:25:17.414 "io_timeout": 0, 00:25:17.414 "avg_latency_us": 2890.422567950274, 00:25:17.414 
"min_latency_us": 594.6785185185186, 00:25:17.414 "max_latency_us": 13786.832592592593 00:25:17.414 } 00:25:17.414 ], 00:25:17.414 "core_count": 1 00:25:17.414 } 00:25:17.672 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:17.672 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:17.672 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:17.672 | .driver_specific 00:25:17.672 | .nvme_error 00:25:17.672 | .status_code 00:25:17.672 | .command_transient_transport_error' 00:25:17.672 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1574758 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1574758 ']' 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1574758 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574758 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574758' 00:25:17.933 killing process with pid 1574758 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1574758 00:25:17.933 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.933 00:25:17.933 Latency(us) 00:25:17.933 [2024-12-09T17:14:40.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.933 [2024-12-09T17:14:40.974Z] =================================================================================================================== 00:25:17.933 [2024-12-09T17:14:40.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.933 18:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1574758 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1575168 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1575168 /var/tmp/bperf.sock 00:25:18.192 18:14:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1575168 ']' 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.192 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.192 [2024-12-09 18:14:41.067237] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:25:18.192 [2024-12-09 18:14:41.067316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575168 ] 00:25:18.192 [2024-12-09 18:14:41.132505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.192 [2024-12-09 18:14:41.186558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.449 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.449 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:18.449 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.449 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.707 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:18.707 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.707 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.707 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.707 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.707 18:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.272 nvme0n1 00:25:19.272 18:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:19.272 18:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.272 18:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.272 18:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.272 18:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:19.272 18:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.272 Running I/O for 2 seconds... 
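The trace above shows the test's counting mechanism: `get_transient_errcount` queries bdevperf's iostat over the RPC socket and extracts the transient-transport-error counter with a `jq` filter (`.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`). A minimal sketch of that filter, run against a hand-written sample document rather than a live `/var/tmp/bperf.sock` (the `358` value is illustrative, not taken from this run):

```shell
# Hedged sketch: apply the same jq path digest.sh uses on bdev_get_iostat
# output. The JSON here is a fabricated stand-in for the real RPC reply.
jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error' <<'EOF'
{"bdevs":[{"driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":358}}}}]}
EOF
```

The `(( count > 0 ))` check in the trace then passes whenever at least one injected CRC32C digest corruption (enabled earlier via `accel_error_inject_error -o crc32c -t corrupt`) surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion.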
00:25:19.272 [2024-12-09 18:14:42.172142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016ef3a28 00:25:19.272 [2024-12-09 18:14:42.173388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.272 [2024-12-09 18:14:42.173442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:19.272 [2024-12-09 18:14:42.184715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016ef7100 00:25:19.273 [2024-12-09 18:14:42.186037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.186081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.199119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016efb048 00:25:19.273 [2024-12-09 18:14:42.201049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.201094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.207504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eeaab8 00:25:19.273 [2024-12-09 18:14:42.208592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.208622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.219375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016ef31b8 00:25:19.273 [2024-12-09 18:14:42.219996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.220040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.234003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eefae0 00:25:19.273 [2024-12-09 18:14:42.235872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.235899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.242391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eee190 00:25:19.273 [2024-12-09 18:14:42.243375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.243417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.257085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.273 [2024-12-09 18:14:42.257349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.257379] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.271294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.273 [2024-12-09 18:14:42.271561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.271589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.285032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.273 [2024-12-09 18:14:42.285276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.285319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.273 [2024-12-09 18:14:42.298894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.273 [2024-12-09 18:14:42.299119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.273 [2024-12-09 18:14:42.299146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.312856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.313060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.313087] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.326831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.327078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.327122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.340584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.340804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.340847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.354431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.354677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.354706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.368141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.368360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:19.533 [2024-12-09 18:14:42.368387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.381996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.382248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.382276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.395775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.396054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.396081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.409446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.409687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.409715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.423091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.423380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20431 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.423424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.436744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.436984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.437027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.450393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.450634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.450662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.464226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.464456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.464483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.477846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.478067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.478094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.491417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.491667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.491696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.505163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.505472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.505505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.518985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.519252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.519280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.532791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.532985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.533012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.546582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.546805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.546832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.533 [2024-12-09 18:14:42.560432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.533 [2024-12-09 18:14:42.560646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.533 [2024-12-09 18:14:42.560674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.792 [2024-12-09 18:14:42.574300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.793 [2024-12-09 18:14:42.574512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.793 [2024-12-09 18:14:42.574566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.793 [2024-12-09 18:14:42.588177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:19.793 
[2024-12-09 18:14:42.588470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.588498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.601875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.602129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.602156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.615751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.616051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.616080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.629583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.629776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.629804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.643312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.643612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.643640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.656946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.657174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.657201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.670783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.671055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.671083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.684700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.684894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.684922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.698164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.698383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.698411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.711873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.712155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.712182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.725860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.726099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.726127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.739449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.739647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.739676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.753009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.753257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.753299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.766774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.767073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.767101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.780464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.780757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.780785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.794200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.794435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.794462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.807981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.808231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.808274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:19.793 [2024-12-09 18:14:42.821682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:19.793 [2024-12-09 18:14:42.821882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.793 [2024-12-09 18:14:42.821909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.835476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.835714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.835742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.849330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.849620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.849648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.862931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.863158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.863191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.876770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.877035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.877078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.890662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.890849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.890892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.904483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.904704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.904733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.918393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.918654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.918683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.932318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.932607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.932636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.946072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.946260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.946288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.959378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.959603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.959633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.973061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.973290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.973334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:42.986627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:42.986825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:42.986868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:43.000383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:43.000631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:43.000660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:43.014056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:43.014345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:43.014372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:43.027817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:43.028082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:43.028109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.053 [2024-12-09 18:14:43.041527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.053 [2024-12-09 18:14:43.041810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.053 [2024-12-09 18:14:43.041839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.054 [2024-12-09 18:14:43.055223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.054 [2024-12-09 18:14:43.055488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.054 [2024-12-09 18:14:43.055515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.054 [2024-12-09 18:14:43.068868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.054 [2024-12-09 18:14:43.069069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.054 [2024-12-09 18:14:43.069096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.054 [2024-12-09 18:14:43.082498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.054 [2024-12-09 18:14:43.082694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.054 [2024-12-09 18:14:43.082721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.314 [2024-12-09 18:14:43.096424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.314 [2024-12-09 18:14:43.096665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.314 [2024-12-09 18:14:43.096700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.314 [2024-12-09 18:14:43.110257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.314 [2024-12-09 18:14:43.110556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.314 [2024-12-09 18:14:43.110610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.314 [2024-12-09 18:14:43.123964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.314 [2024-12-09 18:14:43.124246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.314 [2024-12-09 18:14:43.124274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.314 [2024-12-09 18:14:43.137784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.314 [2024-12-09 18:14:43.138013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.314 [2024-12-09 18:14:43.138056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.314 [2024-12-09 18:14:43.151587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.314 [2024-12-09 18:14:43.151777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.314 [2024-12-09 18:14:43.151805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.314 18678.00 IOPS, 72.96 MiB/s [2024-12-09T17:14:43.355Z] [2024-12-09 18:14:43.165637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.314 [2024-12-09 18:14:43.165862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.165892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.179417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.179630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.179658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.192969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.193255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.193299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.206698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.206954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.206982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.220411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.220605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.220642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.234246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.234567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.234597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.247988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.248226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.248268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.261841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.262068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.262110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.275919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.276203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.276230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.289504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.289703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.289731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.303096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.303282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.303325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.316749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.317021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.317063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.330481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.330713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.330742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.315 [2024-12-09 18:14:43.344200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.315 [2024-12-09 18:14:43.344440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.315 [2024-12-09 18:14:43.344466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.357982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.358260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.358302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.371749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.372043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.372085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.385592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.385779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.385806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.399274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.399488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.399515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.412915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.413172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.413198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.426374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.426635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.426663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.440118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.440362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.440389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.453671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.453861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.453888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.467424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.467620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.467648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.480758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.481003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.481031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.494432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.494669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.494698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.507972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.508158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.508185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.521635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.521825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.521854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.535185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.535375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.535403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.548842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.549127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.549155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.562402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.562593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.562622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.575792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.576065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.589292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.589475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.589503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.576 [2024-12-09 18:14:43.602698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.576 [2024-12-09 18:14:43.602952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.576 [2024-12-09 18:14:43.602980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.616518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.616717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.616745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.630094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.630329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.630357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.643696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.643889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.643916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.657170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.657394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.657421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.670714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.670905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.670941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.684127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.684362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.684389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.697711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.697898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.837 [2024-12-09 18:14:43.697933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.837 [2024-12-09 18:14:43.711064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98
00:25:20.837 [2024-12-09 18:14:43.711256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.711284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.724937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.725149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.725178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.738284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.738476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.738505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.751998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.752211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.752240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.765557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 
[2024-12-09 18:14:43.765747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.765774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.779289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.779468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.779497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.792961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.793149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.793177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.806428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.806704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.806733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.820049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.820246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.820275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.833536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.833700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.833728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.847264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.847449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.837 [2024-12-09 18:14:43.847477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.837 [2024-12-09 18:14:43.860757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.837 [2024-12-09 18:14:43.860974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.838 [2024-12-09 18:14:43.861002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:20.838 [2024-12-09 18:14:43.874515] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:20.838 [2024-12-09 18:14:43.874717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.838 [2024-12-09 18:14:43.874744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.888027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.888304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.888332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.901504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.901700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.914912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.915169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.915196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:25:21.097 [2024-12-09 18:14:43.928365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.928585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.928614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.941906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.942163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.942191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.955501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.955699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.955726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.969028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.969239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.969268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.982553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.982754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.982782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:43.995855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:43.996046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:43.996076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.009373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.009567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.009595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.022971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.023158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.023185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.036587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.036785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.036813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.050169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.050356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.050393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.063844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.064067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.064095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.077619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.077808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.077836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.091172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.091435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.091464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.104744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.104936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.104964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.118266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.118528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 [2024-12-09 18:14:44.118566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.097 [2024-12-09 18:14:44.132072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.097 [2024-12-09 18:14:44.132261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.097 
[2024-12-09 18:14:44.132289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.356 [2024-12-09 18:14:44.145705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.356 [2024-12-09 18:14:44.145901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.356 [2024-12-09 18:14:44.145928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.356 [2024-12-09 18:14:44.159199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce8e30) with pdu=0x200016eebb98 00:25:21.356 [2024-12-09 18:14:44.160429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.356 [2024-12-09 18:14:44.160458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:21.356 18726.50 IOPS, 73.15 MiB/s 00:25:21.356 Latency(us) 00:25:21.356 [2024-12-09T17:14:44.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.356 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:21.356 nvme0n1 : 2.01 18731.32 73.17 0.00 0.00 6817.87 2730.67 14563.56 00:25:21.356 [2024-12-09T17:14:44.397Z] =================================================================================================================== 00:25:21.356 [2024-12-09T17:14:44.397Z] Total : 18731.32 73.17 0.00 0.00 6817.87 2730.67 14563.56 00:25:21.356 { 00:25:21.356 "results": [ 00:25:21.356 { 00:25:21.356 "job": "nvme0n1", 00:25:21.356 "core_mask": "0x2", 00:25:21.356 "workload": "randwrite", 00:25:21.356 "status": "finished", 00:25:21.356 
"queue_depth": 128, 00:25:21.356 "io_size": 4096, 00:25:21.356 "runtime": 2.007974, 00:25:21.356 "iops": 18731.318234200244, 00:25:21.356 "mibps": 73.1692118523447, 00:25:21.356 "io_failed": 0, 00:25:21.356 "io_timeout": 0, 00:25:21.356 "avg_latency_us": 6817.86581902545, 00:25:21.356 "min_latency_us": 2730.6666666666665, 00:25:21.356 "max_latency_us": 14563.555555555555 00:25:21.356 } 00:25:21.356 ], 00:25:21.356 "core_count": 1 00:25:21.356 } 00:25:21.356 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:21.356 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:21.356 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:21.356 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:21.356 | .driver_specific 00:25:21.356 | .nvme_error 00:25:21.356 | .status_code 00:25:21.356 | .command_transient_transport_error' 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1575168 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1575168 ']' 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1575168 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575168 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:21.615 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575168' 00:25:21.615 killing process with pid 1575168 00:25:21.616 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1575168 00:25:21.616 Received shutdown signal, test time was about 2.000000 seconds 00:25:21.616 00:25:21.616 Latency(us) 00:25:21.616 [2024-12-09T17:14:44.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.616 [2024-12-09T17:14:44.657Z] =================================================================================================================== 00:25:21.616 [2024-12-09T17:14:44.657Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.616 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1575168 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1575574 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1575574 /var/tmp/bperf.sock 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1575574 ']' 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.874 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.874 [2024-12-09 18:14:44.769379] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:21.874 [2024-12-09 18:14:44.769460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575574 ] 00:25:21.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.874 Zero copy mechanism will not be used. 
00:25:21.874 [2024-12-09 18:14:44.834879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.874 [2024-12-09 18:14:44.888581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.133 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.133 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:22.133 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.133 18:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.390 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:22.390 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.390 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.390 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.390 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.390 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.956 nvme0n1 00:25:22.956 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:22.956 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.956 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.956 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.956 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:22.956 18:14:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:22.956 Zero copy mechanism will not be used. 00:25:22.956 Running I/O for 2 seconds... 00:25:22.956 [2024-12-09 18:14:45.913065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.956 [2024-12-09 18:14:45.913172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.956 [2024-12-09 18:14:45.913210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.956 [2024-12-09 18:14:45.918983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.956 [2024-12-09 18:14:45.919141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.956 [2024-12-09 18:14:45.919173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.956 
[2024-12-09 18:14:45.925616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.956 [2024-12-09 18:14:45.925764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.956 [2024-12-09 18:14:45.925795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.956 [2024-12-09 18:14:45.931945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.956 [2024-12-09 18:14:45.932106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.956 [2024-12-09 18:14:45.932136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.938375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.938538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.938576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.944722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.944902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.944932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.951168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.951357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.951386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.957937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.958061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.958090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.964803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.964907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.964936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.971052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.971147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.971179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.976369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.976480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.976509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.981467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.981543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.981578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.986697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.986781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.986808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.957 [2024-12-09 18:14:45.993050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:22.957 [2024-12-09 18:14:45.993232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.957 [2024-12-09 18:14:45.993268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:45.999515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:45.999670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:45.999699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.006439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.006561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.006590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.013261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.013398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.013432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.019605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.019764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.019793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.025964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.026147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.026176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.032213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.032297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.032324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.038572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.038855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.038883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.045664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.046024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.046052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.052869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.052974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.053001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.059998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.060116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.060145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.067012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.067181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.067209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.074218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.074310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.074337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.080146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.080226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.080253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.085357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.085430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.085457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.090309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.090387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.090414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.095254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 
00:25:23.217 [2024-12-09 18:14:46.095320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.095347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.100254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.100329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.100356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.105053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.217 [2024-12-09 18:14:46.105130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.217 [2024-12-09 18:14:46.105157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.217 [2024-12-09 18:14:46.110090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.110160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.110186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.115333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.115406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.115433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.120841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.120912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.120939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.125793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.125878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.125905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.131346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.131430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.131457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.136490] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.136591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.136617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.141418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.141517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.141552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.146512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.146610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.146636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.151527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.151626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.151653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:23.218 [2024-12-09 18:14:46.156447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.156529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.156564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.161286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.161370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.161402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.166537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.166685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.166714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.171568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.171644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.171673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.176371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.176463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.176491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.181215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.181293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.181320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.186924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.187003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.187030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.192681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.192760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.192788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.197773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.197860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.197887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.202753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.202846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.202872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.207831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.207918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.207946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.212767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.218 [2024-12-09 18:14:46.212869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.217617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.217711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.217743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.222446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.222532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.222565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.227385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.227453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.227479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.232249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.232332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.232358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.237296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.237371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.237397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.242138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.242211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.242238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.218 [2024-12-09 18:14:46.247174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.218 [2024-12-09 18:14:46.247243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.218 [2024-12-09 18:14:46.247270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.219 [2024-12-09 18:14:46.252255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.219 [2024-12-09 18:14:46.252340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.219 [2024-12-09 18:14:46.252366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.257228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.257318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.257345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.262134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.262204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.262231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.267043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.267115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.267142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.271929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.271998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.272025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.276684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.276755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.276782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.281444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.281527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.281561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.286464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.286561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.286589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.291588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with 
pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.291660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.291693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.296676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.296749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.296776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.301627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.301735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.301763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.306619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.306688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.306715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.311814] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.311898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.311925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.316682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.316766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.316794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.321642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.321734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.321760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.326503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.326610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.326640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 
18:14:46.331278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.331351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.331377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.336356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.336451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.336478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.341912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.341988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.342015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.346862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.346942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.346969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.352003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.352098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.352125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.357500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.357630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.357659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.363781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.363958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.363986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.370130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.370301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.370330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.376427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.376589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.376618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.382256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.382390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.382418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.387491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.387603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.387631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.480 [2024-12-09 18:14:46.393973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.480 [2024-12-09 18:14:46.394152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.480 [2024-12-09 18:14:46.394180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.400151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.400222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.400249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.407253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.407358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.407386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.413895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.413995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.414026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.419934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.420011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.481 [2024-12-09 18:14:46.420037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.426593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.426671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.426700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.432375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.432444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.432471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.437829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.437900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.437935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.443391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.443462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.443489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.449152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.449226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.449254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.454669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.454753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.454781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.460354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.460447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.460474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.466245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.466316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.466344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.471720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.471806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.471833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.477243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.477313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.477340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.482741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.482812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.482840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.487937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 
00:25:23.481 [2024-12-09 18:14:46.488014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.488042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.493397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.493474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.493501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.498765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.498836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.498863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.505051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.505122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.505149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.509886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.509967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.509993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.481 [2024-12-09 18:14:46.514915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.481 [2024-12-09 18:14:46.515001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.481 [2024-12-09 18:14:46.515027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.519893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.519966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.519993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.525398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.525489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.525516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.530498] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.530612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.530641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.536246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.536428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.536456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.542602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.542688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.542715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.549239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.549351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.549379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:23.743 [2024-12-09 18:14:46.556602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.556696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.556727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.563107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.563290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.563319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.569796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.569945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.569973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.576080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.576226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.576254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.582516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.582671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.582701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.589452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.589566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.589602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.596531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.596734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.596763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.602839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.602978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-12-09 18:14:46.603006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.743 [2024-12-09 18:14:46.609290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.743 [2024-12-09 18:14:46.609489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.609517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.616215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.616322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.616351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.623848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.623961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.623990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.631223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.631343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.744 [2024-12-09 18:14:46.631372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.638611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.638741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.638770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.645891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.645968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.645995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.652226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.652342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.652370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.659680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.659876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.659905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.667030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.667148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.667176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.673804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.673978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.674008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.679323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.679423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.679452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.684129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.684221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.684248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.688978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.689100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.689128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.694400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.694480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.694507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.700000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.700091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.700117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.706335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 
00:25:23.744 [2024-12-09 18:14:46.706664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.706694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.712766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.713108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.713137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.719558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.719823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.719852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.726248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.726532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.726571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.732168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.732473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.732502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.737799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.738091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.738121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.742494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.742779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.742808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.747283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.747591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.747620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.752069] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.752327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.752361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.756519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.756801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.756830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.760913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.761122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.761151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.765623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.765946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.765974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:23.744 [2024-12-09 18:14:46.770693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.770951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-12-09 18:14:46.770980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.744 [2024-12-09 18:14:46.776303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:23.744 [2024-12-09 18:14:46.776586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.745 [2024-12-09 18:14:46.776615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.781535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.781754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.785819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.786010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.786038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.790298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.790509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.790538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.794743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.794945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.799101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.799302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.799330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.803371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.803580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.803608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.807704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.807940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.807969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.812903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.813164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.813192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.818234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.818515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.824205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.824520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.006 [2024-12-09 18:14:46.824556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.828578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.828785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.828813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.832841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.833056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.833083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.837298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.837510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.837538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.841833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.842029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.842058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.846165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.846375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.846403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.850699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.850937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.850965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.855194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.855412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.855439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.859540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.859759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.859788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.864006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.864226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.864254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.868216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.868415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.868444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.872285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.872492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.872526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.876982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.877304] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.877332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.882182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.882505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.882534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.887313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.887536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.887572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.892662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.892866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.892894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.006 [2024-12-09 18:14:46.898369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with 
pdu=0x200016eff3c8 00:25:24.006 [2024-12-09 18:14:46.898620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.006 [2024-12-09 18:14:46.898649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.902876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.903053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.903080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 5543.00 IOPS, 692.88 MiB/s [2024-12-09T17:14:47.048Z] [2024-12-09 18:14:46.908398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.908577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.908606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.912695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.912911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.912940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 
18:14:46.916783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.916994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.917020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.920958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.921178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.921207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.925042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.925236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.925264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.929164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.929390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.929424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.933826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.934016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.934045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.938455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.938652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.938681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.943017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.943196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.943224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.947518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.947692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.947720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.952106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.952289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.952317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.956694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.956886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.956914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.961161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.961341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.961369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.965595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.965780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.965809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.969894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.970086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.974307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.974499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.974527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.978605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.978794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.978822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.983146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.983318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.007 [2024-12-09 18:14:46.983346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.987680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.987866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.987893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.992149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.992330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.992368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:46.996721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:46.996906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:46.996934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:47.001240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:47.001426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:47.001453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:47.005782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:47.005966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:47.005993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:47.010217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:47.010392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:47.010420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:47.015128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:47.015300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.007 [2024-12-09 18:14:47.015328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.007 [2024-12-09 18:14:47.021056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.007 [2024-12-09 18:14:47.021301] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.008 [2024-12-09 18:14:47.021330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record sequence (tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8, followed by a WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062) repeats for dozens of further 32-block WRITE commands on qid:1 (cid:0/1, nsid:1) at varying LBAs, timestamps 18:14:47.026 through 18:14:47.354 ...]
00:25:24.529 [2024-12-09 18:14:47.354639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.354776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.529 [2024-12-09 18:14:47.354804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.359625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.359765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.359793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.364979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.365047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.365073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.370810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.370909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.370936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.376786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.376914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.376941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.382650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.382808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.382836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.387779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.387877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.387910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.392759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.392881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.392910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.397105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.397276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.397304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.402158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.402345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.402374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.407170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.407319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.407347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.412876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.412953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.412979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.417556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.417769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.417797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.422651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.422845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.422872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.427764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.427937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.427974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.433723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.433915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.433950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.439100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with 
pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.439252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.439280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.444144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.529 [2024-12-09 18:14:47.444323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-09 18:14:47.444352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.529 [2024-12-09 18:14:47.449254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.449414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.449441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.454344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.454494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.454522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.459395] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.459590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.459618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.464471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.464657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.464685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.469556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.469734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.469761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.474659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.474815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.474855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 
18:14:47.479748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.479904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.479932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.484928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.485106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.485134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.489939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.490055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.490084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.495015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.495143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.495171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.500081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.500211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.500239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.505187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.505302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.505330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.510252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.510371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.510399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.515389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.515586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.515614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.520531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.520674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.520707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.525610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.525730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.525758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.530705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.530823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.530858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.535865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.536031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.536059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.540990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.541127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.541155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.546056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.546182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.546210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.550993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.551153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.551180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.556083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.556238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.530 [2024-12-09 18:14:47.556265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.561163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.561299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.561327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.530 [2024-12-09 18:14:47.566675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.530 [2024-12-09 18:14:47.566902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-09 18:14:47.566929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.572341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.572503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.572531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.577435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.577592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.577620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.582679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.582778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.582806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.587745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.587873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.587901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.592863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.593067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.593096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.597996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.598145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.598173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.603008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.603099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.603125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.608040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.608177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.608205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.613196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.613388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.613416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.618211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 
00:25:24.790 [2024-12-09 18:14:47.618341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.618369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.623457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.623555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.623583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.628505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.628651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.628679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.633582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.633709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.633737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.638648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.638793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.638821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.643624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.643744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.643772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.648694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.648832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.648859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.653848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.654023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.654056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.659029] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.659226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.659254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.664031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.664194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.664222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.669258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.669394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.669422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.674338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.674477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.674505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:24.790 [2024-12-09 18:14:47.679406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.679567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.790 [2024-12-09 18:14:47.679596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.790 [2024-12-09 18:14:47.684642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.790 [2024-12-09 18:14:47.684775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.684804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.689655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.689834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.689863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.694796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.694932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.694960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.699868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.700012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.700039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.704954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.705092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.705120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.710094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.710289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.710316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.715215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.715381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.715409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.720394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.720586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.720615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.725484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.725673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.725701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.730597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.730751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.730779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.735630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.735830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.791 [2024-12-09 18:14:47.735857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.740677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.740792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.740819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.745719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.745871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.745899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.750806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.750952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.750979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.755834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.756026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.756054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.760931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.761105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.761132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.765961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.766057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.766083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.771001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.771147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.771175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.776073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.776216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.781149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.781296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.781324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.786164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.786361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.786395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.791167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.791338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.791366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.796192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 
00:25:24.791 [2024-12-09 18:14:47.796284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.796311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.801311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.801486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.801513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.806351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.806456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.806483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.811350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.811518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.811552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.791 [2024-12-09 18:14:47.816457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.791 [2024-12-09 18:14:47.816606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.791 [2024-12-09 18:14:47.816634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.792 [2024-12-09 18:14:47.821552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.792 [2024-12-09 18:14:47.821687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.792 [2024-12-09 18:14:47.821715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.792 [2024-12-09 18:14:47.826637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:24.792 [2024-12-09 18:14:47.826766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.792 [2024-12-09 18:14:47.826793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.831715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.831850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.831877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.836908] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.837058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.837086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.842124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.842264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.842292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.847157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.847334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.847362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.852143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.852308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:25.050 [2024-12-09 18:14:47.857227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.857378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.857406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.862294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.862458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.862486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.867380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.867552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.867580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.872433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.872610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.872640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.877589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.877697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.877725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.882781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.882905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.882932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.887823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.887983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.888011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.892871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.893057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.893085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.897876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.898024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.898051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.050 [2024-12-09 18:14:47.902952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.050 [2024-12-09 18:14:47.903107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.050 [2024-12-09 18:14:47.903136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.051 [2024-12-09 18:14:47.908038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce9170) with pdu=0x200016eff3c8 00:25:25.051 [2024-12-09 18:14:47.908181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.051 [2024-12-09 18:14:47.908210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.051 5860.50 IOPS, 732.56 MiB/s 00:25:25.051 Latency(us) 00:25:25.051 [2024-12-09T17:14:48.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.051 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:25.051 nvme0n1 : 2.00 5858.51 732.31 0.00 0.00 2723.75 1820.44 
12524.66 00:25:25.051 [2024-12-09T17:14:48.092Z] =================================================================================================================== 00:25:25.051 [2024-12-09T17:14:48.092Z] Total : 5858.51 732.31 0.00 0.00 2723.75 1820.44 12524.66 00:25:25.051 { 00:25:25.051 "results": [ 00:25:25.051 { 00:25:25.051 "job": "nvme0n1", 00:25:25.051 "core_mask": "0x2", 00:25:25.051 "workload": "randwrite", 00:25:25.051 "status": "finished", 00:25:25.051 "queue_depth": 16, 00:25:25.051 "io_size": 131072, 00:25:25.051 "runtime": 2.004093, 00:25:25.051 "iops": 5858.510558142761, 00:25:25.051 "mibps": 732.3138197678451, 00:25:25.051 "io_failed": 0, 00:25:25.051 "io_timeout": 0, 00:25:25.051 "avg_latency_us": 2723.7519444050135, 00:25:25.051 "min_latency_us": 1820.4444444444443, 00:25:25.051 "max_latency_us": 12524.657777777778 00:25:25.051 } 00:25:25.051 ], 00:25:25.051 "core_count": 1 00:25:25.051 } 00:25:25.051 18:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:25.051 18:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:25.051 18:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:25.051 18:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:25.051 | .driver_specific 00:25:25.051 | .nvme_error 00:25:25.051 | .status_code 00:25:25.051 | .command_transient_transport_error' 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 379 > 0 )) 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1575574 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1575574 
']' 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1575574 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575574 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575574' 00:25:25.310 killing process with pid 1575574 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1575574 00:25:25.310 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.310 00:25:25.310 Latency(us) 00:25:25.310 [2024-12-09T17:14:48.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.310 [2024-12-09T17:14:48.351Z] =================================================================================================================== 00:25:25.310 [2024-12-09T17:14:48.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.310 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1575574 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1574207 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1574207 ']' 00:25:25.569 18:14:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1574207 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574207 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574207' 00:25:25.569 killing process with pid 1574207 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1574207 00:25:25.569 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1574207 00:25:25.829 00:25:25.829 real 0m15.526s 00:25:25.829 user 0m31.179s 00:25:25.829 sys 0m4.293s 00:25:25.829 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.829 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.829 ************************************ 00:25:25.829 END TEST nvmf_digest_error 00:25:25.829 ************************************ 00:25:25.829 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:25.829 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:25.829 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.830 18:14:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.830 rmmod nvme_tcp 00:25:25.830 rmmod nvme_fabrics 00:25:25.830 rmmod nvme_keyring 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1574207 ']' 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1574207 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1574207 ']' 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1574207 00:25:25.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1574207) - No such process 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1574207 is not found' 00:25:25.830 Process with pid 1574207 is not found 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@791 -- # iptables-save 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.830 18:14:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.370 00:25:28.370 real 0m35.917s 00:25:28.370 user 1m3.935s 00:25:28.370 sys 0m10.144s 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:28.370 ************************************ 00:25:28.370 END TEST nvmf_digest 00:25:28.370 ************************************ 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:25:28.370 18:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.370 ************************************ 00:25:28.370 START TEST nvmf_bdevperf 00:25:28.370 ************************************ 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:28.370 * Looking for test storage... 00:25:28.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:28.370 18:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.370 18:14:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.370 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf 
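The `cmp_versions` trace above (the `IFS=.-:` splits, `read -ra ver1`/`ver2`, and the per-component `decimal` comparisons) implements a component-wise version comparison so `lt 1.15 2` can decide whether the installed lcov accepts branch-coverage options. A minimal standalone sketch of the same idea — the function name `version_lt` is illustrative, not SPDK's exact helper:

```shell
#!/usr/bin/env bash
# Component-wise "less than" version comparison, mirroring the
# IFS=.-: split and per-index compare seen in scripts/common.sh.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0, so "2" == "2.0"
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Because the split also honors `-` and `:`, strings like `1.15-rc` would need extra handling; the trace only ever feeds numeric components.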
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.371 --rc genhtml_branch_coverage=1 00:25:28.371 --rc genhtml_function_coverage=1 00:25:28.371 --rc genhtml_legend=1 00:25:28.371 --rc geninfo_all_blocks=1 00:25:28.371 --rc geninfo_unexecuted_blocks=1 00:25:28.371 00:25:28.371 ' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.371 --rc genhtml_branch_coverage=1 00:25:28.371 --rc genhtml_function_coverage=1 00:25:28.371 --rc genhtml_legend=1 00:25:28.371 --rc geninfo_all_blocks=1 00:25:28.371 --rc geninfo_unexecuted_blocks=1 00:25:28.371 00:25:28.371 ' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.371 --rc genhtml_branch_coverage=1 00:25:28.371 --rc genhtml_function_coverage=1 00:25:28.371 --rc genhtml_legend=1 00:25:28.371 --rc geninfo_all_blocks=1 00:25:28.371 --rc geninfo_unexecuted_blocks=1 00:25:28.371 00:25:28.371 ' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.371 --rc genhtml_branch_coverage=1 00:25:28.371 --rc genhtml_function_coverage=1 00:25:28.371 --rc genhtml_legend=1 00:25:28.371 --rc geninfo_all_blocks=1 00:25:28.371 --rc geninfo_unexecuted_blocks=1 00:25:28.371 00:25:28.371 ' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.371 18:14:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
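The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33: an empty expansion reached an integer test, which errors out (status 2) instead of evaluating false. A short sketch of the failure and the usual defensive pattern — `SPDK_TEST_FLAG` is an illustrative stand-in, not the actual variable name:

```shell
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" failure seen in
# the trace, where an empty string reaches an integer comparison.
SPDK_TEST_FLAG=""   # illustrative stand-in for the unset variable

# The raw test does not evaluate to false -- it errors out:
[ "$SPDK_TEST_FLAG" -eq 1 ] 2>/dev/null || echo "test errored, status $?"

# Defaulting the expansion keeps the comparison well-formed:
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

The test run tolerates this because the `[' … ']` failure merely skips an optional branch, but the same pattern would silently misbehave under `set -e`.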
nvmf/common.sh@309 -- # xtrace_disable 00:25:28.371 18:14:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:30.273 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:30.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:30.273 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:30.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.273 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:30.274 00:25:30.274 --- 10.0.0.2 ping statistics --- 00:25:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.274 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:25:30.274 00:25:30.274 --- 10.0.0.1 ping statistics --- 00:25:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.274 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.274 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.532 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:30.532 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:30.532 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.532 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.532 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.532 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1578053 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
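Condensed, the `nvmf_tcp_init` sequence traced above isolates the target-side port in its own network namespace and verifies reachability in both directions. The commands below are taken directly from the trace; they require root and the two `cvl_*` interfaces from this specific machine, so this is a command sketch of the topology rather than a portable script:

```shell
# Target NIC moves into a fresh namespace; initiator NIC stays on the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic; the comment tag lets teardown strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The teardown seen earlier in the log (`iptables-save | grep -v SPDK_NVMF | iptables-restore`, `ip netns delete`) is the inverse of this setup, which is why the rule carries the `SPDK_NVMF` comment.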
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1578053 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1578053 ']' 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.533 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.533 [2024-12-09 18:14:53.388263] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:30.533 [2024-12-09 18:14:53.388340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.533 [2024-12-09 18:14:53.458810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:30.533 [2024-12-09 18:14:53.514397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.533 [2024-12-09 18:14:53.514449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:30.533 [2024-12-09 18:14:53.514477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.533 [2024-12-09 18:14:53.514489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.533 [2024-12-09 18:14:53.514498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.533 [2024-12-09 18:14:53.516011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.533 [2024-12-09 18:14:53.516079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.533 [2024-12-09 18:14:53.516082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.793 [2024-12-09 18:14:53.649583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.793 18:14:53 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.793 Malloc0 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.793 [2024-12-09 18:14:53.710575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
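The `rpc_cmd` calls traced above build the target in four steps: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev through a subsystem, and open a listener. Against a running `nvmf_tgt`, the equivalent direct `scripts/rpc.py` sequence would be (arguments and NQNs taken from the log; the default `/var/tmp/spdk.sock` RPC socket is assumed):

```shell
# Transport with 8 KiB IO unit size, as in nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB backing device with 512-byte blocks
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem allowing any host (-a), with the serial from the trace
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listener on the namespaced target address
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

In the test itself these run through `rpc_cmd`, which additionally routes the calls into the `cvl_0_0_ns_spdk` namespace where the target process lives.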
]] 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:30.793 { 00:25:30.793 "params": { 00:25:30.793 "name": "Nvme$subsystem", 00:25:30.793 "trtype": "$TEST_TRANSPORT", 00:25:30.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.793 "adrfam": "ipv4", 00:25:30.793 "trsvcid": "$NVMF_PORT", 00:25:30.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.793 "hdgst": ${hdgst:-false}, 00:25:30.793 "ddgst": ${ddgst:-false} 00:25:30.793 }, 00:25:30.793 "method": "bdev_nvme_attach_controller" 00:25:30.793 } 00:25:30.793 EOF 00:25:30.793 )") 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:30.793 18:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:30.793 "params": { 00:25:30.793 "name": "Nvme1", 00:25:30.793 "trtype": "tcp", 00:25:30.793 "traddr": "10.0.0.2", 00:25:30.793 "adrfam": "ipv4", 00:25:30.793 "trsvcid": "4420", 00:25:30.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.793 "hdgst": false, 00:25:30.793 "ddgst": false 00:25:30.793 }, 00:25:30.793 "method": "bdev_nvme_attach_controller" 00:25:30.793 }' 00:25:30.793 [2024-12-09 18:14:53.758662] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:30.793 [2024-12-09 18:14:53.758736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578080 ] 00:25:30.793 [2024-12-09 18:14:53.826559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.054 [2024-12-09 18:14:53.887988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.313 Running I/O for 1 seconds... 
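The xtrace above shows `gen_nvmf_target_json` assembling one `bdev_nvme_attach_controller` params block per subsystem from a heredoc, joining the blocks with commas, and handing the result to bdevperf on an anonymous fd (`--json /dev/fd/62`). A minimal self-contained sketch of that pattern follows; the env-variable defaults (tcp, 10.0.0.2, 4420) mirror the values printed in this run but are assumptions here, not read from the real harness:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the trace: build one
# "bdev_nvme_attach_controller" params block per subsystem, join them
# with commas, and print the result for bdevperf's --json input.
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

# e.g.: bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
gen_nvmf_target_json 1
```

With no arguments the function defaults to subsystem 1, producing the same `Nvme1`/`cnode1` config printed in the trace.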
00:25:32.248 8130.00 IOPS, 31.76 MiB/s 00:25:32.248 Latency(us) 00:25:32.248 [2024-12-09T17:14:55.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.248 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:32.248 Verification LBA range: start 0x0 length 0x4000 00:25:32.248 Nvme1n1 : 1.01 8151.19 31.84 0.00 0.00 15631.89 2864.17 15922.82 00:25:32.248 [2024-12-09T17:14:55.289Z] =================================================================================================================== 00:25:32.248 [2024-12-09T17:14:55.289Z] Total : 8151.19 31.84 0.00 0.00 15631.89 2864.17 15922.82 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1578222 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.506 { 00:25:32.506 "params": { 00:25:32.506 "name": "Nvme$subsystem", 00:25:32.506 "trtype": "$TEST_TRANSPORT", 00:25:32.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.506 "adrfam": "ipv4", 00:25:32.506 "trsvcid": "$NVMF_PORT", 00:25:32.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.506 "hdgst": ${hdgst:-false}, 00:25:32.506 "ddgst": 
${ddgst:-false} 00:25:32.506 }, 00:25:32.506 "method": "bdev_nvme_attach_controller" 00:25:32.506 } 00:25:32.506 EOF 00:25:32.506 )") 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:32.506 18:14:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:32.506 "params": { 00:25:32.506 "name": "Nvme1", 00:25:32.506 "trtype": "tcp", 00:25:32.506 "traddr": "10.0.0.2", 00:25:32.506 "adrfam": "ipv4", 00:25:32.506 "trsvcid": "4420", 00:25:32.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.506 "hdgst": false, 00:25:32.506 "ddgst": false 00:25:32.506 }, 00:25:32.506 "method": "bdev_nvme_attach_controller" 00:25:32.506 }' 00:25:32.506 [2024-12-09 18:14:55.436135] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:32.506 [2024-12-09 18:14:55.436211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578222 ] 00:25:32.506 [2024-12-09 18:14:55.508650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.766 [2024-12-09 18:14:55.566745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.766 Running I/O for 15 seconds... 
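The second bdevperf instance is started with `-t 15 -f` so it keeps driving verify I/O while the script hard-kills the target, forcing every outstanding command to complete as ABORTED - SQ DELETION and exercising the host reconnect path. A generic sketch of that step (the pid and delays are placeholders, not the harness's values):

```shell
#!/usr/bin/env bash
# Sketch of the failover step: while a long-running bdevperf job drives
# I/O, SIGKILL the NVMe-oF target process, then give the host time to
# observe the aborted completions and start reconnecting.
run_failover_step() {
  local target_pid=$1 delay=${2:-3}
  sleep "$delay"                              # let I/O ramp up first
  kill -9 "$target_pid" 2>/dev/null || true   # hard-kill the target
  sleep "$delay"                              # host sees SQ deletion aborts
}
```

In the trace this corresponds to `kill -9` on the target pid followed by `sleep 3` before the test inspects the host's recovery.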
00:25:35.080 7957.00 IOPS, 31.08 MiB/s [2024-12-09T17:14:58.694Z] 7983.00 IOPS, 31.18 MiB/s [2024-12-09T17:14:58.694Z] 18:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1578053 00:25:35.653 18:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:35.653 [2024-12-09 18:14:58.404837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.653 [2024-12-09 18:14:58.404904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.654 [2024-12-09 18:14:58.404936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.654 [2024-12-09 18:14:58.404968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.654 [2024-12-09 18:14:58.404986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.654 [2024-12-09 18:14:58.405002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.654 [2024-12-09 18:14:58.405019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.654 [2024-12-09 18:14:58.405034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.654 [2024-12-09 18:14:58.405065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.654 [2024-12-09 18:14:58.405089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.654 [... dozens of further nvme_qpair notices in the same pattern, one pair per outstanding command: nvme_io_qpair_print_command for WRITE/READ sqid:1 covering lba 35560-36296, each followed by spdk_nvme_print_completion reporting ABORTED - SQ DELETION (00/08) after the target was killed ...] 00:25:35.656 [2024-12-09 18:14:58.407438]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 
[2024-12-09 18:14:58.407799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.407986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.407998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 
18:14:58.408273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.656 [2024-12-09 18:14:58.408445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.656 [2024-12-09 18:14:58.408471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.656 [2024-12-09 18:14:58.408498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.656 [2024-12-09 18:14:58.408511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.657 [2024-12-09 18:14:58.408523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.657 [2024-12-09 18:14:58.408576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:35.657 [2024-12-09 18:14:58.408611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.657 [2024-12-09 18:14:58.408639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.657 [2024-12-09 18:14:58.408667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e53a0 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.408707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:35.657 [2024-12-09 18:14:58.408718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:35.657 [2024-12-09 18:14:58.408729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35680 len:8 PRP1 0x0 PRP2 0x0 00:25:35.657 [2024-12-09 18:14:58.408741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.657 [2024-12-09 18:14:58.408918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:35.657 [2024-12-09 18:14:58.408932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.657 [2024-12-09 18:14:58.408945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.408982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.657 [2024-12-09 18:14:58.408995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.409008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.657 [2024-12-09 18:14:58.409020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.657 [2024-12-09 18:14:58.409032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.412186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.412222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.413098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.413127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.413143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 
18:14:58.413362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.413615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.413638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.413654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.657 [2024-12-09 18:14:58.413669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:35.657 [2024-12-09 18:14:58.425505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.425853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.425881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.425896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.426098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.426309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.426328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.426340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:25:35.657 [2024-12-09 18:14:58.426351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:35.657 [2024-12-09 18:14:58.438735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.439127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.439169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.439184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.439439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.439690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.439712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.439726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.657 [2024-12-09 18:14:58.439738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.657 [2024-12-09 18:14:58.451786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.452280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.452321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.452338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.452665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.452861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.452880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.452892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.657 [2024-12-09 18:14:58.452902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.657 [2024-12-09 18:14:58.464928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.465256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.465282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.465297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.465514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.465747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.465769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.465781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.657 [2024-12-09 18:14:58.465793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.657 [2024-12-09 18:14:58.478159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.478597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.478625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.478641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.478884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.479078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.479096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.479114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.657 [2024-12-09 18:14:58.479126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.657 [2024-12-09 18:14:58.491553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.492006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.492034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.657 [2024-12-09 18:14:58.492050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.657 [2024-12-09 18:14:58.492294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.657 [2024-12-09 18:14:58.492511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.657 [2024-12-09 18:14:58.492554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.657 [2024-12-09 18:14:58.492578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.657 [2024-12-09 18:14:58.492593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.657 [2024-12-09 18:14:58.504873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.657 [2024-12-09 18:14:58.505360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.657 [2024-12-09 18:14:58.505388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.658 [2024-12-09 18:14:58.505404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.658 [2024-12-09 18:14:58.505630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.658 [2024-12-09 18:14:58.505868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.658 [2024-12-09 18:14:58.505888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.658 [2024-12-09 18:14:58.505916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.658 [2024-12-09 18:14:58.505927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.658 [2024-12-09 18:14:58.518140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.658 [2024-12-09 18:14:58.518539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.658 [2024-12-09 18:14:58.518580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.658 [2024-12-09 18:14:58.518598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.658 [2024-12-09 18:14:58.518815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.658 [2024-12-09 18:14:58.519050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.658 [2024-12-09 18:14:58.519069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.658 [2024-12-09 18:14:58.519081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.658 [2024-12-09 18:14:58.519093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.658 [... the same nine-record reset/reconnect cycle repeats 27 more times at roughly 13 ms intervals, from 18:14:58.531472 through 18:14:58.881692: nvme_ctrlr_disconnect resetting controller, connect() failed errno = 111 against tqpair=0x22d2660 addr=10.0.0.2 port=4420, Bad file descriptor on flush, Ctrlr in error state, reinitialization failed, Resetting controller failed ...]
00:25:35.920 6943.00 IOPS, 27.12 MiB/s [2024-12-09T17:14:58.961Z]
00:25:35.921 [2024-12-09 18:14:58.893911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.921 [2024-12-09 18:14:58.894244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-12-09 18:14:58.894272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.921 [2024-12-09 18:14:58.894288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.921 [2024-12-09 18:14:58.894512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.921 [2024-12-09 18:14:58.894737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.921 [2024-12-09 18:14:58.894758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.921 [2024-12-09 18:14:58.894771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.921 [2024-12-09 18:14:58.894782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.921 [2024-12-09 18:14:58.907208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.921 [2024-12-09 18:14:58.907626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-12-09 18:14:58.907656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.921 [2024-12-09 18:14:58.907672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.921 [2024-12-09 18:14:58.907902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.921 [2024-12-09 18:14:58.908119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.921 [2024-12-09 18:14:58.908138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.921 [2024-12-09 18:14:58.908150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.921 [2024-12-09 18:14:58.908162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.921 [2024-12-09 18:14:58.920472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.921 [2024-12-09 18:14:58.920842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-12-09 18:14:58.920876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:35.921 [2024-12-09 18:14:58.920894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:35.921 [2024-12-09 18:14:58.921134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:35.921 [2024-12-09 18:14:58.921373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.921 [2024-12-09 18:14:58.921392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.921 [2024-12-09 18:14:58.921405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.921 [2024-12-09 18:14:58.921417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.921 [2024-12-09 18:14:58.933840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.921 [2024-12-09 18:14:58.934262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.921 [2024-12-09 18:14:58.934290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:35.921 [2024-12-09 18:14:58.934306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:35.921 [2024-12-09 18:14:58.934538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:35.921 [2024-12-09 18:14:58.934784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.921 [2024-12-09 18:14:58.934805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.921 [2024-12-09 18:14:58.934819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.921 [2024-12-09 18:14:58.934845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.921 [2024-12-09 18:14:58.947204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.921 [2024-12-09 18:14:58.947601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.921 [2024-12-09 18:14:58.947630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:35.921 [2024-12-09 18:14:58.947646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:35.921 [2024-12-09 18:14:58.947878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:35.921 [2024-12-09 18:14:58.948095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.921 [2024-12-09 18:14:58.948114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.921 [2024-12-09 18:14:58.948126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.921 [2024-12-09 18:14:58.948138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:58.960598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:58.960978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:58.961008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:58.961025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:58.961246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:58.961510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:58.961531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:58.961572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:58.961587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:58.974236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:58.974574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:58.974604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:58.974621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:58.974854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:58.975078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:58.975097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:58.975110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:58.975122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:58.987686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:58.988144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:58.988171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:58.988187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:58.988410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:58.988677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:58.988699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:58.988713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:58.988726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:59.001120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:59.001499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:59.001527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:59.001551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:59.001770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:59.002008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:59.002036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:59.002050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:59.002061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:59.014560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:59.014955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:59.014983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:59.014999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:59.015224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:59.015440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:59.015459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:59.015472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:59.015483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:59.027912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:59.028285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:59.028327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:59.028343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:59.028595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:59.028824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:59.028844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:59.028872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:59.028884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:59.041249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:59.041650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.182 [2024-12-09 18:14:59.041679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.182 [2024-12-09 18:14:59.041695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.182 [2024-12-09 18:14:59.041921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.182 [2024-12-09 18:14:59.042136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.182 [2024-12-09 18:14:59.042170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.182 [2024-12-09 18:14:59.042183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.182 [2024-12-09 18:14:59.042195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.182 [2024-12-09 18:14:59.054626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.182 [2024-12-09 18:14:59.055133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.055176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.055192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.055433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.055686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.055707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.055720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.055732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.068026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.068452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.068495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.068512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.068763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.068981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.068999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.069012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.069023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.081367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.081703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.081731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.081747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.081971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.082172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.082190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.082203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.082214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.094770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.095167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.095202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.095219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.095463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.095709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.095731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.095744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.095756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.108118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.108519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.108569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.108585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.108829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.109046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.109065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.109077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.109088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.121387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.121792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.121821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.121837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.122080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.122296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.122315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.122327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.122339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.134760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.135135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.135163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.135179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.135428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.135664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.135685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.135698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.135710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.148056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.148457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.148485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.148502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.148738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.148994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.149013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.149025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.149036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.161352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.161787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.161831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.161846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.162099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.162299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.162318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.162330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.162342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.174914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.175287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.175315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.175331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.183 [2024-12-09 18:14:59.175556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.183 [2024-12-09 18:14:59.175792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.183 [2024-12-09 18:14:59.175817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.183 [2024-12-09 18:14:59.175846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.183 [2024-12-09 18:14:59.175858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.183 [2024-12-09 18:14:59.188360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.183 [2024-12-09 18:14:59.188744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.183 [2024-12-09 18:14:59.188772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.183 [2024-12-09 18:14:59.188788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.184 [2024-12-09 18:14:59.189018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.184 [2024-12-09 18:14:59.189234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.184 [2024-12-09 18:14:59.189252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.184 [2024-12-09 18:14:59.189265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.184 [2024-12-09 18:14:59.189276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.184 [2024-12-09 18:14:59.201925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:36.184 [2024-12-09 18:14:59.202304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.184 [2024-12-09 18:14:59.202346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:36.184 [2024-12-09 18:14:59.202362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:36.184 [2024-12-09 18:14:59.202634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:36.184 [2024-12-09 18:14:59.202855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:36.184 [2024-12-09 18:14:59.202890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:36.184 [2024-12-09 18:14:59.202902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:36.184 [2024-12-09 18:14:59.202914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:36.184 [2024-12-09 18:14:59.215410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.184 [2024-12-09 18:14:59.215773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.184 [2024-12-09 18:14:59.215810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.184 [2024-12-09 18:14:59.215826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.184 [2024-12-09 18:14:59.216063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.184 [2024-12-09 18:14:59.216286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.184 [2024-12-09 18:14:59.216306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.184 [2024-12-09 18:14:59.216319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.184 [2024-12-09 18:14:59.216330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.458 [2024-12-09 18:14:59.228998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.458 [2024-12-09 18:14:59.229379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.458 [2024-12-09 18:14:59.229407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.458 [2024-12-09 18:14:59.229423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.458 [2024-12-09 18:14:59.229663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.458 [2024-12-09 18:14:59.229910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.458 [2024-12-09 18:14:59.229929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.458 [2024-12-09 18:14:59.229941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.229953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.242355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.242720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.242748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.242765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.243017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.243217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.243236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.243248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.243260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.255629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.256030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.256057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.256073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.256310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.256511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.256530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.256542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.256596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.269023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.269465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.269499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.269516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.269774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.269993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.270012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.270024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.270034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.282346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.282703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.282732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.282748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.282979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.283195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.283215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.283227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.283238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.295678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.296075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.296103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.296118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.296362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.296600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.296621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.296634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.296646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.309032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.309379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.309407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.309423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.309677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.309925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.309944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.309957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.309969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.322258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.322701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.322730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.322746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.322989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.323189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.323208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.459 [2024-12-09 18:14:59.323219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.459 [2024-12-09 18:14:59.323231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.459 [2024-12-09 18:14:59.335577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.459 [2024-12-09 18:14:59.335917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.459 [2024-12-09 18:14:59.335945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.459 [2024-12-09 18:14:59.335961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.459 [2024-12-09 18:14:59.336184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.459 [2024-12-09 18:14:59.336400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.459 [2024-12-09 18:14:59.336418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.336430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.336441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.348863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.349278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.349305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.349321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.349554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.349767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.349791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.349805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.349816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.362217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.362639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.362666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.362698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.362942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.363141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.363160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.363172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.363183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.375512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.375922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.375950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.375966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.376211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.376411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.376429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.376442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.376453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.388838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.389280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.389309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.389325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.389583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.389791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.389810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.389823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.389849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.402134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.402535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.402573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.402605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.402851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.403064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.403083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.403095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.403105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.415460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.415854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.415896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.415912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.416148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.416343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.416361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.416373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.416383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.428822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.429212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.429240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.429256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.429486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.429758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.460 [2024-12-09 18:14:59.429781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.460 [2024-12-09 18:14:59.429795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.460 [2024-12-09 18:14:59.429807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.460 [2024-12-09 18:14:59.442062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.460 [2024-12-09 18:14:59.442432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.460 [2024-12-09 18:14:59.442464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.460 [2024-12-09 18:14:59.442481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.460 [2024-12-09 18:14:59.442753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.460 [2024-12-09 18:14:59.442969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.461 [2024-12-09 18:14:59.442987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.461 [2024-12-09 18:14:59.442999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.461 [2024-12-09 18:14:59.443010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.461 [2024-12-09 18:14:59.455182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.461 [2024-12-09 18:14:59.455568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.461 [2024-12-09 18:14:59.455597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.461 [2024-12-09 18:14:59.455612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.461 [2024-12-09 18:14:59.455849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.461 [2024-12-09 18:14:59.456059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.461 [2024-12-09 18:14:59.456077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.461 [2024-12-09 18:14:59.456089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.461 [2024-12-09 18:14:59.456100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.461 [2024-12-09 18:14:59.468446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.461 [2024-12-09 18:14:59.468845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.461 [2024-12-09 18:14:59.468888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.461 [2024-12-09 18:14:59.468903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.461 [2024-12-09 18:14:59.469166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.461 [2024-12-09 18:14:59.469361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.461 [2024-12-09 18:14:59.469379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.461 [2024-12-09 18:14:59.469391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.461 [2024-12-09 18:14:59.469402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.461 [2024-12-09 18:14:59.481737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.461 [2024-12-09 18:14:59.482156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.461 [2024-12-09 18:14:59.482184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.461 [2024-12-09 18:14:59.482200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.461 [2024-12-09 18:14:59.482429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.461 [2024-12-09 18:14:59.482704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.461 [2024-12-09 18:14:59.482726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.461 [2024-12-09 18:14:59.482740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.461 [2024-12-09 18:14:59.482752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.775 [2024-12-09 18:14:59.495256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.775 [2024-12-09 18:14:59.495685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-12-09 18:14:59.495730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.775 [2024-12-09 18:14:59.495748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.775 [2024-12-09 18:14:59.496005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.775 [2024-12-09 18:14:59.496216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.775 [2024-12-09 18:14:59.496236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.775 [2024-12-09 18:14:59.496248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.775 [2024-12-09 18:14:59.496260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.775 [2024-12-09 18:14:59.508601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.775 [2024-12-09 18:14:59.509023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-12-09 18:14:59.509052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.775 [2024-12-09 18:14:59.509068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.775 [2024-12-09 18:14:59.509305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.775 [2024-12-09 18:14:59.509516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.775 [2024-12-09 18:14:59.509557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.775 [2024-12-09 18:14:59.509571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.775 [2024-12-09 18:14:59.509583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.775 [2024-12-09 18:14:59.521755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.775 [2024-12-09 18:14:59.522249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-12-09 18:14:59.522276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.775 [2024-12-09 18:14:59.522307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.775 [2024-12-09 18:14:59.522568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.775 [2024-12-09 18:14:59.522769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.775 [2024-12-09 18:14:59.522792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.775 [2024-12-09 18:14:59.522805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.775 [2024-12-09 18:14:59.522816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.775 [2024-12-09 18:14:59.534932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.775 [2024-12-09 18:14:59.535273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-12-09 18:14:59.535349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.775 [2024-12-09 18:14:59.535364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.775 [2024-12-09 18:14:59.535614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.775 [2024-12-09 18:14:59.535836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.775 [2024-12-09 18:14:59.535855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.775 [2024-12-09 18:14:59.535868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.775 [2024-12-09 18:14:59.535879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.775 [2024-12-09 18:14:59.548057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.775 [2024-12-09 18:14:59.548420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.775 [2024-12-09 18:14:59.548446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.775 [2024-12-09 18:14:59.548461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.775 [2024-12-09 18:14:59.548729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.775 [2024-12-09 18:14:59.548993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.549012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.549024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.549034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.561162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.561481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-12-09 18:14:59.561507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.776 [2024-12-09 18:14:59.561522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.776 [2024-12-09 18:14:59.561789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.776 [2024-12-09 18:14:59.562022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.562040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.562052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.562063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.574185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.574554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-12-09 18:14:59.574581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.776 [2024-12-09 18:14:59.574596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.776 [2024-12-09 18:14:59.574812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.776 [2024-12-09 18:14:59.575021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.575038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.575050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.575061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.587300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.587793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-12-09 18:14:59.587834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.776 [2024-12-09 18:14:59.587851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.776 [2024-12-09 18:14:59.588104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.776 [2024-12-09 18:14:59.588314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.588331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.588343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.588353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.600579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.601009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-12-09 18:14:59.601036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.776 [2024-12-09 18:14:59.601068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.776 [2024-12-09 18:14:59.601309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.776 [2024-12-09 18:14:59.601519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.601537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.601575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.601587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.613762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.614210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-12-09 18:14:59.614242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.776 [2024-12-09 18:14:59.614258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.776 [2024-12-09 18:14:59.614495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.776 [2024-12-09 18:14:59.614736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.614756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.614769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.614780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.626772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.627139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.776 [2024-12-09 18:14:59.627182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.776 [2024-12-09 18:14:59.627199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.776 [2024-12-09 18:14:59.627468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.776 [2024-12-09 18:14:59.627710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.776 [2024-12-09 18:14:59.627731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.776 [2024-12-09 18:14:59.627744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.776 [2024-12-09 18:14:59.627756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.776 [2024-12-09 18:14:59.639949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.776 [2024-12-09 18:14:59.640283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.640309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.640324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.640526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.640742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.640762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.777 [2024-12-09 18:14:59.640774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.777 [2024-12-09 18:14:59.640786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.777 [2024-12-09 18:14:59.653090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.777 [2024-12-09 18:14:59.653427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.653455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.653471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.653733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.653966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.653984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.777 [2024-12-09 18:14:59.653996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.777 [2024-12-09 18:14:59.654007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.777 [2024-12-09 18:14:59.666294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.777 [2024-12-09 18:14:59.666659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.666686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.666702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.666940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.667149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.667168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.777 [2024-12-09 18:14:59.667179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.777 [2024-12-09 18:14:59.667190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.777 [2024-12-09 18:14:59.679320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.777 [2024-12-09 18:14:59.679712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.679741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.679756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.679987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.680223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.680257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.777 [2024-12-09 18:14:59.680270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.777 [2024-12-09 18:14:59.680282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.777 [2024-12-09 18:14:59.692605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.777 [2024-12-09 18:14:59.692995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.693021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.693037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.693273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.693484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.693506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.777 [2024-12-09 18:14:59.693519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.777 [2024-12-09 18:14:59.693530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.777 [2024-12-09 18:14:59.705702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.777 [2024-12-09 18:14:59.706067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.706109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.706125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.706379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.706614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.706634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.777 [2024-12-09 18:14:59.706646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.777 [2024-12-09 18:14:59.706657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.777 [2024-12-09 18:14:59.718734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.777 [2024-12-09 18:14:59.719100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.777 [2024-12-09 18:14:59.719127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.777 [2024-12-09 18:14:59.719142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.777 [2024-12-09 18:14:59.719382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.777 [2024-12-09 18:14:59.719636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.777 [2024-12-09 18:14:59.719656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.778 [2024-12-09 18:14:59.719669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.778 [2024-12-09 18:14:59.719681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.778 [2024-12-09 18:14:59.731822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.778 [2024-12-09 18:14:59.732251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-12-09 18:14:59.732292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.778 [2024-12-09 18:14:59.732308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.778 [2024-12-09 18:14:59.732559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.778 [2024-12-09 18:14:59.732782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.778 [2024-12-09 18:14:59.732801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.778 [2024-12-09 18:14:59.732813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.778 [2024-12-09 18:14:59.732824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.778 [2024-12-09 18:14:59.744908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.778 [2024-12-09 18:14:59.745302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-12-09 18:14:59.745328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.778 [2024-12-09 18:14:59.745343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.778 [2024-12-09 18:14:59.745577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.778 [2024-12-09 18:14:59.745837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.778 [2024-12-09 18:14:59.745857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.778 [2024-12-09 18:14:59.745869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.778 [2024-12-09 18:14:59.745881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.778 [2024-12-09 18:14:59.757979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.778 [2024-12-09 18:14:59.758320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-12-09 18:14:59.758348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.778 [2024-12-09 18:14:59.758363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.778 [2024-12-09 18:14:59.758599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.778 [2024-12-09 18:14:59.758806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.778 [2024-12-09 18:14:59.758825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.778 [2024-12-09 18:14:59.758837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.778 [2024-12-09 18:14:59.758848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.778 [2024-12-09 18:14:59.771120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.778 [2024-12-09 18:14:59.771483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-12-09 18:14:59.771510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.778 [2024-12-09 18:14:59.771525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.778 [2024-12-09 18:14:59.771755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.778 [2024-12-09 18:14:59.771986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.778 [2024-12-09 18:14:59.772004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.778 [2024-12-09 18:14:59.772016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.778 [2024-12-09 18:14:59.772027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:36.778 [2024-12-09 18:14:59.784555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:36.778 [2024-12-09 18:14:59.784936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.778 [2024-12-09 18:14:59.784971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:36.778 [2024-12-09 18:14:59.784988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:36.778 [2024-12-09 18:14:59.785223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:36.778 [2024-12-09 18:14:59.785447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:36.778 [2024-12-09 18:14:59.785468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:36.778 [2024-12-09 18:14:59.785480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:36.778 [2024-12-09 18:14:59.785491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.067 [2024-12-09 18:14:59.797941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.067 [2024-12-09 18:14:59.798341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-12-09 18:14:59.798387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.067 [2024-12-09 18:14:59.798405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.067 [2024-12-09 18:14:59.798667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.067 [2024-12-09 18:14:59.798921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.067 [2024-12-09 18:14:59.798943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.067 [2024-12-09 18:14:59.798966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.067 [2024-12-09 18:14:59.798980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.067 5207.25 IOPS, 20.34 MiB/s [2024-12-09T17:15:00.108Z] [2024-12-09 18:14:59.811230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.067 [2024-12-09 18:14:59.811609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-12-09 18:14:59.811637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.067 [2024-12-09 18:14:59.811653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.067 [2024-12-09 18:14:59.811893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.067 [2024-12-09 18:14:59.812102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.067 [2024-12-09 18:14:59.812120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.067 [2024-12-09 18:14:59.812132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.067 [2024-12-09 18:14:59.812143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.067 [2024-12-09 18:14:59.824605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.067 [2024-12-09 18:14:59.825035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-12-09 18:14:59.825082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.067 [2024-12-09 18:14:59.825101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.067 [2024-12-09 18:14:59.825368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.067 [2024-12-09 18:14:59.825610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.067 [2024-12-09 18:14:59.825632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.067 [2024-12-09 18:14:59.825649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.067 [2024-12-09 18:14:59.825668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.067 [2024-12-09 18:14:59.837699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.067 [2024-12-09 18:14:59.838083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-12-09 18:14:59.838124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.067 [2024-12-09 18:14:59.838139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.067 [2024-12-09 18:14:59.838371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.067 [2024-12-09 18:14:59.838626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.067 [2024-12-09 18:14:59.838647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.067 [2024-12-09 18:14:59.838659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.067 [2024-12-09 18:14:59.838671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.067 [2024-12-09 18:14:59.850800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.851155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.851197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.851213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.851480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.851708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.851728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.851740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.851752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.864085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.864463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.864505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.864521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.864777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.865023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.865046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.865059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.865070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.877134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.877501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.877542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.877566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.877810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.878038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.878056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.878068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.878079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.890295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.890627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.890655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.890670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.890894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.891104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.891122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.891134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.891144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.903425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.903858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.903886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.903902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.904139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.904349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.904367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.904379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.904394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.916592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.916972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.917012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.917027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.917257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.917466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.917485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.917496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.917507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.929752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.930156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.930198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.930213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.930468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.930722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.930744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.930758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.930770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.943013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.943378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.943405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.943421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.943687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.943905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.943923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.943935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.943946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.956099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.956495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.956527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.956550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.956783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.068 [2024-12-09 18:14:59.957015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.068 [2024-12-09 18:14:59.957034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.068 [2024-12-09 18:14:59.957045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.068 [2024-12-09 18:14:59.957056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.068 [2024-12-09 18:14:59.969164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.068 [2024-12-09 18:14:59.969490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-12-09 18:14:59.969516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.068 [2024-12-09 18:14:59.969531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.068 [2024-12-09 18:14:59.969799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:14:59.970032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:14:59.970051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:14:59.970063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:14:59.970073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:14:59.982238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:14:59.982570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:14:59.982596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:14:59.982611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:14:59.982828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:14:59.983038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:14:59.983056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:14:59.983068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:14:59.983079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:14:59.995372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:14:59.995794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:14:59.995835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:14:59.995851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:14:59.996092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:14:59.996286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:14:59.996304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:14:59.996315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:14:59.996326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.008961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.009296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.009325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.009342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.009570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.009802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.009822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:15:00.009836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:15:00.009863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.022720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.023183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.023216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.023236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.023497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.023745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.023768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:15:00.023783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:15:00.023805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.036509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.036947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.036976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.036992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.037225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.037443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.037471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:15:00.037500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:15:00.037512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.050015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.050387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.050421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.050437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.050677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.050925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.050945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:15:00.050957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:15:00.050968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.063436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.063834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.063863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.063880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.064111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.064327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.064345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:15:00.064358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:15:00.064370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.077170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.077634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.077665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.077682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.077914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.078138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.078170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.069 [2024-12-09 18:15:00.078182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.069 [2024-12-09 18:15:00.078193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.069 [2024-12-09 18:15:00.090634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.069 [2024-12-09 18:15:00.091066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.069 [2024-12-09 18:15:00.091096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.069 [2024-12-09 18:15:00.091112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.069 [2024-12-09 18:15:00.091357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.069 [2024-12-09 18:15:00.091609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.069 [2024-12-09 18:15:00.091631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.070 [2024-12-09 18:15:00.091643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.070 [2024-12-09 18:15:00.091654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.070 [2024-12-09 18:15:00.104568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.070 [2024-12-09 18:15:00.105089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.070 [2024-12-09 18:15:00.105132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.070 [2024-12-09 18:15:00.105150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.070 [2024-12-09 18:15:00.105401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.070 [2024-12-09 18:15:00.105637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.070 [2024-12-09 18:15:00.105661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.070 [2024-12-09 18:15:00.105675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.070 [2024-12-09 18:15:00.105687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.329 [2024-12-09 18:15:00.118353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.329 [2024-12-09 18:15:00.118721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.329 [2024-12-09 18:15:00.118751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.329 [2024-12-09 18:15:00.118767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.329 [2024-12-09 18:15:00.118998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.329 [2024-12-09 18:15:00.119239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.329 [2024-12-09 18:15:00.119264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.329 [2024-12-09 18:15:00.119294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.329 [2024-12-09 18:15:00.119307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.329 [2024-12-09 18:15:00.131735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.329 [2024-12-09 18:15:00.132122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.329 [2024-12-09 18:15:00.132168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.329 [2024-12-09 18:15:00.132186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.329 [2024-12-09 18:15:00.132417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.329 [2024-12-09 18:15:00.132651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.329 [2024-12-09 18:15:00.132672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.329 [2024-12-09 18:15:00.132684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.329 [2024-12-09 18:15:00.132697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.329 [2024-12-09 18:15:00.145223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.329 [2024-12-09 18:15:00.145613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.329 [2024-12-09 18:15:00.145643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.329 [2024-12-09 18:15:00.145660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.329 [2024-12-09 18:15:00.145906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.329 [2024-12-09 18:15:00.146142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.329 [2024-12-09 18:15:00.146162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.329 [2024-12-09 18:15:00.146175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.329 [2024-12-09 18:15:00.146188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.329 [2024-12-09 18:15:00.158735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.329 [2024-12-09 18:15:00.159112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.329 [2024-12-09 18:15:00.159156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.329 [2024-12-09 18:15:00.159173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.329 [2024-12-09 18:15:00.159402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.329 [2024-12-09 18:15:00.159642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.329 [2024-12-09 18:15:00.159663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.329 [2024-12-09 18:15:00.159675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.329 [2024-12-09 18:15:00.159686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.329 [2024-12-09 18:15:00.172132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.329 [2024-12-09 18:15:00.172504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.329 [2024-12-09 18:15:00.172554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.329 [2024-12-09 18:15:00.172604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.329 [2024-12-09 18:15:00.172850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.329 [2024-12-09 18:15:00.173059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.329 [2024-12-09 18:15:00.173079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.329 [2024-12-09 18:15:00.173092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.329 [2024-12-09 18:15:00.173104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.329 [2024-12-09 18:15:00.185655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.329 [2024-12-09 18:15:00.186060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.329 [2024-12-09 18:15:00.186090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.329 [2024-12-09 18:15:00.186107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.329 [2024-12-09 18:15:00.186338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.186623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.186646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.186660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.186673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.199072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.199448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.199481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.199513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.199740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.199989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.200009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.200021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.200032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.212659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.213019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.213048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.213065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.213297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.213508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.213566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.213596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.213609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.226196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.226679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.226729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.226746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.226980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.227190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.227209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.227220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.227231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.239744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.240162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.240204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.240219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.240468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.240722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.240744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.240758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.240770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.253127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.253529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.253566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.253612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.253855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.254071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.254090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.254102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.254129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.266710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.267168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.267200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.267231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.267462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.267708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.267728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.267741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.267753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.279948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.280397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.280439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.280455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.280708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.280948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.280992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.281005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.281017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.293377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.293810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.293838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.293854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.294086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.294306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.294325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.294338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.294349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.306681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.307105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.307142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.307159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.307403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.307663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.307685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.330 [2024-12-09 18:15:00.307698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.330 [2024-12-09 18:15:00.307710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.330 [2024-12-09 18:15:00.320282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.330 [2024-12-09 18:15:00.320665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.330 [2024-12-09 18:15:00.320694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.330 [2024-12-09 18:15:00.320710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.330 [2024-12-09 18:15:00.320953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.330 [2024-12-09 18:15:00.321147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.330 [2024-12-09 18:15:00.321165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.331 [2024-12-09 18:15:00.321177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.331 [2024-12-09 18:15:00.321188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.331 [2024-12-09 18:15:00.333881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.331 [2024-12-09 18:15:00.334231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.331 [2024-12-09 18:15:00.334259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.331 [2024-12-09 18:15:00.334274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.331 [2024-12-09 18:15:00.334533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.331 [2024-12-09 18:15:00.334780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.331 [2024-12-09 18:15:00.334799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.331 [2024-12-09 18:15:00.334811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.331 [2024-12-09 18:15:00.334822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.331 [2024-12-09 18:15:00.347401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.331 [2024-12-09 18:15:00.347886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.331 [2024-12-09 18:15:00.347928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.331 [2024-12-09 18:15:00.347945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.331 [2024-12-09 18:15:00.348191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.331 [2024-12-09 18:15:00.348437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.331 [2024-12-09 18:15:00.348456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.331 [2024-12-09 18:15:00.348469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.331 [2024-12-09 18:15:00.348480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.331 [2024-12-09 18:15:00.360771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.331 [2024-12-09 18:15:00.361173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.331 [2024-12-09 18:15:00.361220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.331 [2024-12-09 18:15:00.361235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.331 [2024-12-09 18:15:00.361498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.331 [2024-12-09 18:15:00.361724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.331 [2024-12-09 18:15:00.361744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.331 [2024-12-09 18:15:00.361756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.331 [2024-12-09 18:15:00.361767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.590 [2024-12-09 18:15:00.374125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.590 [2024-12-09 18:15:00.374526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.590 [2024-12-09 18:15:00.374564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.590 [2024-12-09 18:15:00.374582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.590 [2024-12-09 18:15:00.374812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.590 [2024-12-09 18:15:00.375045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.590 [2024-12-09 18:15:00.375063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.590 [2024-12-09 18:15:00.375075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.590 [2024-12-09 18:15:00.375086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.590 [2024-12-09 18:15:00.387377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.590 [2024-12-09 18:15:00.387788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.590 [2024-12-09 18:15:00.387831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.590 [2024-12-09 18:15:00.387847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.590 [2024-12-09 18:15:00.388116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.590 [2024-12-09 18:15:00.388309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.590 [2024-12-09 18:15:00.388332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.388344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.388356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.400796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.401190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.401218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.401234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.401477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.401728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.401749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.401763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.401775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.414513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.414888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.414917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.414934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.415170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.415381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.415399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.415411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.415424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.428041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.428459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.428510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.428525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.428766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.428999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.429018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.429030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.429041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.441408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.441786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.441815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.441831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.442046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.442269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.442288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.442301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.442313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.454841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.455283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.455311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.455328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.455582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.455803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.455824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.455837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.455865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.468133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.468535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.468584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.468602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.468826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.469072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.469091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.469104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.469115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.481431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.591 [2024-12-09 18:15:00.481804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.591 [2024-12-09 18:15:00.481849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420
00:25:37.591 [2024-12-09 18:15:00.481864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set
00:25:37.591 [2024-12-09 18:15:00.482126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor
00:25:37.591 [2024-12-09 18:15:00.482339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.591 [2024-12-09 18:15:00.482358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.591 [2024-12-09 18:15:00.482370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.591 [2024-12-09 18:15:00.482380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:37.591 [2024-12-09 18:15:00.494721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.591 [2024-12-09 18:15:00.495101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.591 [2024-12-09 18:15:00.495128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.591 [2024-12-09 18:15:00.495144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.591 [2024-12-09 18:15:00.495379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.591 [2024-12-09 18:15:00.495602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.591 [2024-12-09 18:15:00.495635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.591 [2024-12-09 18:15:00.495650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.591 [2024-12-09 18:15:00.495662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.591 [2024-12-09 18:15:00.508069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.591 [2024-12-09 18:15:00.508441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.591 [2024-12-09 18:15:00.508484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.591 [2024-12-09 18:15:00.508501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.591 [2024-12-09 18:15:00.508766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.591 [2024-12-09 18:15:00.508995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.591 [2024-12-09 18:15:00.509015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.591 [2024-12-09 18:15:00.509027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.591 [2024-12-09 18:15:00.509039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.591 [2024-12-09 18:15:00.521307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.521750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.521778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.521794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.522046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.522241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.522259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.522271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.522282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.534691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.535145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.535187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.535203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.535446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.535694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.535715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.535728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.535739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.548049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.548439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.548466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.548481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.548738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.548986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.549004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.549016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.549027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.561295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.561681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.561723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.561739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.561993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.562187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.562209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.562222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.562233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.574487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.575089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.575131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.575148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.575389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.575624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.575644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.575657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.575668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.587771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.588171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.588199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.588215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.588457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.588715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.588745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.588758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.588771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.601076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.601542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.601592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.601608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.601824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.602073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.602091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.602103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.602114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.614410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.614787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.614815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.614831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.615075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.592 [2024-12-09 18:15:00.615269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.592 [2024-12-09 18:15:00.615288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.592 [2024-12-09 18:15:00.615300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.592 [2024-12-09 18:15:00.615311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.592 [2024-12-09 18:15:00.627775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.592 [2024-12-09 18:15:00.628192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.592 [2024-12-09 18:15:00.628220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.592 [2024-12-09 18:15:00.628236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.592 [2024-12-09 18:15:00.628492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.628742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.628764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.628779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.628791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.641164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.641536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.641594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.641611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.641842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.642069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.642087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.642099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.642110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.654463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.654840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.654887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.654929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.655160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.655354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.655372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.655384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.655395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.667910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.668326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.668377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.668393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.668655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.668909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.668928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.668941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.668952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.681287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.681802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.681859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.681881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.682128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.682322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.682340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.682353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.682364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.694705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.695050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.695092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.695108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.695345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.695595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.695616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.695645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.695658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.708148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.708482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.708565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.708584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.708813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.709046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.709065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.709077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.709089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.721455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.852 [2024-12-09 18:15:00.721951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.852 [2024-12-09 18:15:00.721980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.852 [2024-12-09 18:15:00.721996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.852 [2024-12-09 18:15:00.722229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.852 [2024-12-09 18:15:00.722449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.852 [2024-12-09 18:15:00.722467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.852 [2024-12-09 18:15:00.722479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.852 [2024-12-09 18:15:00.722490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.852 [2024-12-09 18:15:00.734929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.735416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.735459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.735475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.735730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.735962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.735991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.736004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.736015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.748306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.748721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.748764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.748779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.749048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.749247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.749266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.749278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.749289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.761571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.761978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.762019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.762034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.762283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.762493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.762511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.762523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.762534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.774821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.775294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.775337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.775353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.775634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.775873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.775893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.775905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.775932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.788165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.788591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.788634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.788650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.788912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.789112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.789130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.789142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.789154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.801487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.801834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.801862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.801878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.802119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.802320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.802339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.802351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.802363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 4165.80 IOPS, 16.27 MiB/s [2024-12-09T17:15:00.894Z] [2024-12-09 18:15:00.814908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.815270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.815298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.815330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.815586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.815802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.815822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.815834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.815846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.828229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.828592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.828625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.828642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.828859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.829092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.829111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.829124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.829135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.841658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.842129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.842171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.842188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.842431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.842689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.842709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.842722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.842733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.854939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.855373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.853 [2024-12-09 18:15:00.855399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.853 [2024-12-09 18:15:00.855429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.853 [2024-12-09 18:15:00.855670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.853 [2024-12-09 18:15:00.855914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.853 [2024-12-09 18:15:00.855933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.853 [2024-12-09 18:15:00.855945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.853 [2024-12-09 18:15:00.855956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.853 [2024-12-09 18:15:00.868263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.853 [2024-12-09 18:15:00.868594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.854 [2024-12-09 18:15:00.868622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.854 [2024-12-09 18:15:00.868638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.854 [2024-12-09 18:15:00.868870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.854 [2024-12-09 18:15:00.869096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.854 [2024-12-09 18:15:00.869115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.854 [2024-12-09 18:15:00.869127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.854 [2024-12-09 18:15:00.869138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.854 [2024-12-09 18:15:00.881398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.854 [2024-12-09 18:15:00.881802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.854 [2024-12-09 18:15:00.881831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:37.854 [2024-12-09 18:15:00.881847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:37.854 [2024-12-09 18:15:00.882102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:37.854 [2024-12-09 18:15:00.882295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.854 [2024-12-09 18:15:00.882314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.854 [2024-12-09 18:15:00.882326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.854 [2024-12-09 18:15:00.882337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.894716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.895136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.895179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.895194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.895429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.895674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.895694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.895707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.895718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.908007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.908444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.908524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.908540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.908837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.909050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.909073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.909086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.909097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.921277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.921659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.921688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.921704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.921936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.922148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.922166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.922178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.922189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.934667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.935106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.935160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.935174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.935418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.935630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.935649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.935661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.935672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.947934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.948283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.948325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.948341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.948567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.948797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.948817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.948845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.948866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.961284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.961640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.961669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.961685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.961915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.962124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.962142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.962169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.962188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.974558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.974899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.974927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.974943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.975183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.975410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.975428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.975440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.975451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:00.987814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:00.988279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:00.988321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:00.988336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:00.988615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:00.988845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:00.988864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:00.988877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:00.988888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:01.001103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:01.001468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.114 [2024-12-09 18:15:01.001499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.114 [2024-12-09 18:15:01.001514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.114 [2024-12-09 18:15:01.001773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.114 [2024-12-09 18:15:01.001985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.114 [2024-12-09 18:15:01.002004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.114 [2024-12-09 18:15:01.002016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.114 [2024-12-09 18:15:01.002026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.114 [2024-12-09 18:15:01.014582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.114 [2024-12-09 18:15:01.014989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.015058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.015074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.015332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.015570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.015590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.015602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.015613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.027969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.028366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.028436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.028451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.028732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.028950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.028968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.028980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.028991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.041330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.041714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.041743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.041758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.042006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.042239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.042258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.042271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.042282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.054520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.054878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.054906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.054921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.055144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.055354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.055372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.055384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.055395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.067809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.068133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.068174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.068189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.068412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.068673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.068696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.068709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.068721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.081166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.081568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.081596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.081612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.081842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.082070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.082093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.082106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.082117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.094457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.094900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.094952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.094968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.095230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.095424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.095443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.095455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.095466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.107786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.108185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.108212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.108227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.108444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.108682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.108709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.108722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.108733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.121003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.121491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.121531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.121556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.121814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.122040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.122060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.115 [2024-12-09 18:15:01.122072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.115 [2024-12-09 18:15:01.122088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.115 [2024-12-09 18:15:01.134439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.115 [2024-12-09 18:15:01.134818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.115 [2024-12-09 18:15:01.134847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.115 [2024-12-09 18:15:01.134863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.115 [2024-12-09 18:15:01.135103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.115 [2024-12-09 18:15:01.135312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.115 [2024-12-09 18:15:01.135330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.116 [2024-12-09 18:15:01.135342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.116 [2024-12-09 18:15:01.135353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.116 [2024-12-09 18:15:01.148107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.116 [2024-12-09 18:15:01.148488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.116 [2024-12-09 18:15:01.148518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.116 [2024-12-09 18:15:01.148535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.116 [2024-12-09 18:15:01.148770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.116 [2024-12-09 18:15:01.149013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.116 [2024-12-09 18:15:01.149033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.116 [2024-12-09 18:15:01.149045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.116 [2024-12-09 18:15:01.149058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.375 [2024-12-09 18:15:01.161666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.375 [2024-12-09 18:15:01.162057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.375 [2024-12-09 18:15:01.162087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.375 [2024-12-09 18:15:01.162104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.375 [2024-12-09 18:15:01.162337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.375 [2024-12-09 18:15:01.162611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.375 [2024-12-09 18:15:01.162632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.375 [2024-12-09 18:15:01.162646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.375 [2024-12-09 18:15:01.162659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.375 [2024-12-09 18:15:01.175078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.375 [2024-12-09 18:15:01.175488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.375 [2024-12-09 18:15:01.175522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.375 [2024-12-09 18:15:01.175539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.375 [2024-12-09 18:15:01.175765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.375 [2024-12-09 18:15:01.176022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.375 [2024-12-09 18:15:01.176041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.375 [2024-12-09 18:15:01.176054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.375 [2024-12-09 18:15:01.176066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.375 [2024-12-09 18:15:01.188384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.375 [2024-12-09 18:15:01.188746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.375 [2024-12-09 18:15:01.188775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.375 [2024-12-09 18:15:01.188791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.375 [2024-12-09 18:15:01.189044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.375 [2024-12-09 18:15:01.189238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.375 [2024-12-09 18:15:01.189257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.375 [2024-12-09 18:15:01.189268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.375 [2024-12-09 18:15:01.189279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.375 [2024-12-09 18:15:01.202034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.375 [2024-12-09 18:15:01.202369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.375 [2024-12-09 18:15:01.202398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.375 [2024-12-09 18:15:01.202414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.375 [2024-12-09 18:15:01.202639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.375 [2024-12-09 18:15:01.202875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.375 [2024-12-09 18:15:01.202895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.375 [2024-12-09 18:15:01.202907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.375 [2024-12-09 18:15:01.202919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.375 [2024-12-09 18:15:01.215644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.375 [2024-12-09 18:15:01.216046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.375 [2024-12-09 18:15:01.216073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.375 [2024-12-09 18:15:01.216089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.375 [2024-12-09 18:15:01.216328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.375 [2024-12-09 18:15:01.216576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.375 [2024-12-09 18:15:01.216611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.375 [2024-12-09 18:15:01.216625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.375 [2024-12-09 18:15:01.216638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.375 [2024-12-09 18:15:01.229338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.375 [2024-12-09 18:15:01.229697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.375 [2024-12-09 18:15:01.229726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.375 [2024-12-09 18:15:01.229742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.375 [2024-12-09 18:15:01.229972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.230194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.230213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.230225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.230236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.242917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.243261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.243298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.243314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.243572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.243794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.243814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.243827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.243840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.256424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.256768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.256796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.256812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.257049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.257264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.257288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.257301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.257312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.269924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.270296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.270339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.270355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.270640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.270854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.270888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.270901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.270913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.283145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.283518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.283556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.283576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.283807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.284040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.284058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.284086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.284097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.296798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.297184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.297222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.297255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.297494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.297741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.297763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.297777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.297795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.310212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.310600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.310630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.310647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.310877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.311095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.311119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.311151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.311171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.323697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.324088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.324125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.324147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.324371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.324619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.324641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.324654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.324666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.336999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.337372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.337401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.337417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.337674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.337916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.337935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.337948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.337959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.350277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.350662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.350712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.350729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.350997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.351191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.376 [2024-12-09 18:15:01.351209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.376 [2024-12-09 18:15:01.351227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.376 [2024-12-09 18:15:01.351248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.376 [2024-12-09 18:15:01.363922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.376 [2024-12-09 18:15:01.364377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.376 [2024-12-09 18:15:01.364406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.376 [2024-12-09 18:15:01.364437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.376 [2024-12-09 18:15:01.364668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.376 [2024-12-09 18:15:01.364900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.377 [2024-12-09 18:15:01.364919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.377 [2024-12-09 18:15:01.364931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.377 [2024-12-09 18:15:01.364942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.377 [2024-12-09 18:15:01.377349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.377 [2024-12-09 18:15:01.377684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.377 [2024-12-09 18:15:01.377714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.377 [2024-12-09 18:15:01.377731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.377 [2024-12-09 18:15:01.377962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.377 [2024-12-09 18:15:01.378180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.377 [2024-12-09 18:15:01.378199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.377 [2024-12-09 18:15:01.378212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.377 [2024-12-09 18:15:01.378223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.377 [2024-12-09 18:15:01.390758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.377 [2024-12-09 18:15:01.391149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.377 [2024-12-09 18:15:01.391193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.377 [2024-12-09 18:15:01.391210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.377 [2024-12-09 18:15:01.391483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.377 [2024-12-09 18:15:01.391725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.377 [2024-12-09 18:15:01.391745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.377 [2024-12-09 18:15:01.391758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.377 [2024-12-09 18:15:01.391770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
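[editor's note] The repeated `connect() failed, errno = 111` records above are ECONNREFUSED: the NVMe/TCP initiator keeps dialing 10.0.0.2 port 4420 while no listener is accepting there, so each controller reset attempt fails and the reconnect poll retries. This is a minimal standalone sketch (not SPDK code) reproducing that errno by connecting to a loopback port that has no listener; the port number is obtained by binding and immediately closing a probe socket, so a rare race with another process grabbing the port is possible:

```python
import errno
import socket


def try_connect(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success, else the errno."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port))


# Find a port with no listener: bind to an ephemeral port, note the
# number the kernel assigned, then close the socket before connecting.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

rc = try_connect("127.0.0.1", free_port)
# On Linux, a closed loopback port yields ECONNREFUSED, which is errno 111
print(rc, rc == errno.ECONNREFUSED)
```

The log keeps cycling because the bdev_nvme layer schedules another reconnect after each failure; the errors stop once a process listens on the target port again.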
00:25:38.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1578053 Killed "${NVMF_APP[@]}" "$@"
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1579101
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1579101
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1579101 ']'
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:38.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.377 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.377 [2024-12-09 18:15:01.404302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.377 [2024-12-09 18:15:01.404660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.377 [2024-12-09 18:15:01.404691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.377 [2024-12-09 18:15:01.404708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.377 [2024-12-09 18:15:01.404970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.377 [2024-12-09 18:15:01.405213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.377 [2024-12-09 18:15:01.405233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.377 [2024-12-09 18:15:01.405246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.377 [2024-12-09 18:15:01.405258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.638 [2024-12-09 18:15:01.417926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.638 [2024-12-09 18:15:01.418317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.638 [2024-12-09 18:15:01.418347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.638 [2024-12-09 18:15:01.418377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.638 [2024-12-09 18:15:01.418625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.638 [2024-12-09 18:15:01.418877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.638 [2024-12-09 18:15:01.418914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.638 [2024-12-09 18:15:01.418927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.638 [2024-12-09 18:15:01.418940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.638 [2024-12-09 18:15:01.431634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.638 [2024-12-09 18:15:01.432103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.638 [2024-12-09 18:15:01.432133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.638 [2024-12-09 18:15:01.432150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.638 [2024-12-09 18:15:01.432381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.638 [2024-12-09 18:15:01.432641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.432664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.432679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.432693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.445190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.445538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.445598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.445618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.445835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.446075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.446095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.446109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.446121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.639 [2024-12-09 18:15:01.449879] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:25:38.639 [2024-12-09 18:15:01.449967] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.639 [2024-12-09 18:15:01.458822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.459245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.459274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.459298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.459558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.459780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.459800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.459815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.459827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.472464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.472812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.472841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.472858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.473091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.473308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.473327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.473340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.473352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.485872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.486272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.486313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.486331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.486573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.486809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.486838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.486852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.486864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.499369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.499730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.499759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.499774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.500004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.500249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.500269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.500283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.500294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.512713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.513131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.513158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.513174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.513399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.513642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.513663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.513675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.513687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.526118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.526494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.526522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.526539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.526779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.527001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.527020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.527033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.527044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.528941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:38.639 [2024-12-09 18:15:01.539622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.540294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.540345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.540367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.540629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.540859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.540888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.540905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.540928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.553078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.553508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.553555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.553575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.639 [2024-12-09 18:15:01.553812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.639 [2024-12-09 18:15:01.554055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.639 [2024-12-09 18:15:01.554074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.639 [2024-12-09 18:15:01.554088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.639 [2024-12-09 18:15:01.554117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.639 [2024-12-09 18:15:01.566496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.639 [2024-12-09 18:15:01.566873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.639 [2024-12-09 18:15:01.566917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.639 [2024-12-09 18:15:01.566934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.567163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.567381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.567400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.567412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.567424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.579824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.580222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.580249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.580264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.580487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.580750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.580771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.580785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.580796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.640 [2024-12-09 18:15:01.587749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.640 [2024-12-09 18:15:01.587780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.640 [2024-12-09 18:15:01.587807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.640 [2024-12-09 18:15:01.587818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:38.640 [2024-12-09 18:15:01.587828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.640 [2024-12-09 18:15:01.589220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.640 [2024-12-09 18:15:01.589277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.640 [2024-12-09 18:15:01.589280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.640 [2024-12-09 18:15:01.593346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.593777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.593809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.593827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.594064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.594281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.594301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.594316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.594331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.607032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.607576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.607615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.607646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.607889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.608126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.608148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.608166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.608181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.620749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.621283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.621323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.621344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.621596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.621826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.621858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.621875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.621891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.634299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.634865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.634906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.634926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.635168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.635388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.635409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.635427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.635442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.647981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.648456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.648492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.648511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.648754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.648990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.649011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.649027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.649042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.661527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.662066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.662104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.662125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.662365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.662609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.662631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.662660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.662677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.640 [2024-12-09 18:15:01.675151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.640 [2024-12-09 18:15:01.675600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.640 [2024-12-09 18:15:01.675635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.640 [2024-12-09 18:15:01.675654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.640 [2024-12-09 18:15:01.675913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.640 [2024-12-09 18:15:01.676130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.640 [2024-12-09 18:15:01.676150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.640 [2024-12-09 18:15:01.676165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.640 [2024-12-09 18:15:01.676179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.901 [2024-12-09 18:15:01.688783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.901 [2024-12-09 18:15:01.689132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.901 [2024-12-09 18:15:01.689161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.901 [2024-12-09 18:15:01.689177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.901 [2024-12-09 18:15:01.689393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.901 [2024-12-09 18:15:01.689632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.901 [2024-12-09 18:15:01.689663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.901 [2024-12-09 18:15:01.689677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.901 [2024-12-09 18:15:01.689689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.901 [2024-12-09 18:15:01.702339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.901 [2024-12-09 18:15:01.702700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.901 [2024-12-09 18:15:01.702729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.901 [2024-12-09 18:15:01.702746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.901 [2024-12-09 18:15:01.702977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.901 [2024-12-09 18:15:01.703190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.901 [2024-12-09 18:15:01.703210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.901 [2024-12-09 18:15:01.703224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.901 [2024-12-09 18:15:01.703236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.901 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.901 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:38.901 [2024-12-09 18:15:01.715928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.901 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.901 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.901 [2024-12-09 18:15:01.716265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.716293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.716310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.902 [2024-12-09 18:15:01.716527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.716756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.716778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.716792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.716805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 [2024-12-09 18:15:01.729451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.729804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.729832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.729860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.730091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.730306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.730326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.730340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.730352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.902 [2024-12-09 18:15:01.743138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.743465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.743493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.743510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.743734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.743973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.743994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.744007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.744019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 [2024-12-09 18:15:01.744023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.902 [2024-12-09 18:15:01.756761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.757197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.757226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.757243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.757478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.757729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.757751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.757766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.757779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 [2024-12-09 18:15:01.770266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.770686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.770714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.770731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.770961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.771184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.771203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.771216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.771228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 [2024-12-09 18:15:01.783911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.784271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.784301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.784317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.784573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.784788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.784808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.784823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.784836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 Malloc0 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.902 [2024-12-09 18:15:01.797618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.798039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.798068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.798085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.798317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.798560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.798581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.798596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.798609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.902 3471.50 IOPS, 13.56 MiB/s [2024-12-09T17:15:01.943Z] 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.902 [2024-12-09 18:15:01.811203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.902 [2024-12-09 18:15:01.811562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.902 [2024-12-09 18:15:01.811591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2660 with addr=10.0.0.2, port=4420 00:25:38.902 [2024-12-09 18:15:01.811608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2660 is same with the state(6) to be set 00:25:38.902 [2024-12-09 18:15:01.811824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2660 (9): Bad file descriptor 00:25:38.902 [2024-12-09 18:15:01.812061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.902 [2024-12-09 18:15:01.812081] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.902 [2024-12-09 18:15:01.812094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.902 [2024-12-09 18:15:01.812107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.902 [2024-12-09 18:15:01.813068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.902 18:15:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1578222 00:25:38.903 [2024-12-09 18:15:01.824863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.903 [2024-12-09 18:15:01.854084] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:41.211 4121.71 IOPS, 16.10 MiB/s [2024-12-09T17:15:04.818Z] 4693.75 IOPS, 18.33 MiB/s [2024-12-09T17:15:06.191Z] 5129.89 IOPS, 20.04 MiB/s [2024-12-09T17:15:07.125Z] 5455.60 IOPS, 21.31 MiB/s [2024-12-09T17:15:08.059Z] 5734.18 IOPS, 22.40 MiB/s [2024-12-09T17:15:08.995Z] 5965.00 IOPS, 23.30 MiB/s [2024-12-09T17:15:09.932Z] 6160.31 IOPS, 24.06 MiB/s [2024-12-09T17:15:10.866Z] 6333.00 IOPS, 24.74 MiB/s [2024-12-09T17:15:10.866Z] 6475.40 IOPS, 25.29 MiB/s 00:25:47.825 Latency(us) 00:25:47.825 [2024-12-09T17:15:10.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.825 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:47.825 Verification LBA range: start 0x0 length 0x4000 00:25:47.825 Nvme1n1 : 15.01 6477.66 25.30 9789.90 0.00 7845.16 570.41 18932.62 00:25:47.825 [2024-12-09T17:15:10.866Z] =================================================================================================================== 00:25:47.825 [2024-12-09T17:15:10.866Z] Total : 6477.66 25.30 9789.90 0.00 7845.16 570.41 18932.62 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 
-- # sync 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.083 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.083 rmmod nvme_tcp 00:25:48.083 rmmod nvme_fabrics 00:25:48.083 rmmod nvme_keyring 00:25:48.341 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.341 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:48.341 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1579101 ']' 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1579101 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1579101 ']' 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1579101 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579101 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579101' 00:25:48.342 killing process with pid 1579101 00:25:48.342 18:15:11 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1579101 00:25:48.342 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1579101 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.602 18:15:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.509 00:25:50.509 real 0m22.586s 00:25:50.509 user 0m59.254s 00:25:50.509 sys 0m4.736s 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.509 ************************************ 00:25:50.509 END TEST nvmf_bdevperf 00:25:50.509 
************************************ 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.509 ************************************ 00:25:50.509 START TEST nvmf_target_disconnect 00:25:50.509 ************************************ 00:25:50.509 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:50.768 * Looking for test storage... 00:25:50.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.768 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.769 --rc genhtml_branch_coverage=1 00:25:50.769 --rc genhtml_function_coverage=1 00:25:50.769 --rc genhtml_legend=1 00:25:50.769 --rc geninfo_all_blocks=1 00:25:50.769 --rc geninfo_unexecuted_blocks=1 
00:25:50.769 00:25:50.769 ' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.769 --rc genhtml_branch_coverage=1 00:25:50.769 --rc genhtml_function_coverage=1 00:25:50.769 --rc genhtml_legend=1 00:25:50.769 --rc geninfo_all_blocks=1 00:25:50.769 --rc geninfo_unexecuted_blocks=1 00:25:50.769 00:25:50.769 ' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.769 --rc genhtml_branch_coverage=1 00:25:50.769 --rc genhtml_function_coverage=1 00:25:50.769 --rc genhtml_legend=1 00:25:50.769 --rc geninfo_all_blocks=1 00:25:50.769 --rc geninfo_unexecuted_blocks=1 00:25:50.769 00:25:50.769 ' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.769 --rc genhtml_branch_coverage=1 00:25:50.769 --rc genhtml_function_coverage=1 00:25:50.769 --rc genhtml_legend=1 00:25:50.769 --rc geninfo_all_blocks=1 00:25:50.769 --rc geninfo_unexecuted_blocks=1 00:25:50.769 00:25:50.769 ' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.769 18:15:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.769 18:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.304 
18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:53.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:53.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:53.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:53.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.304 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.305 18:15:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.305 18:15:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:25:53.305 00:25:53.305 --- 10.0.0.2 ping statistics --- 00:25:53.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.305 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:25:53.305 00:25:53.305 --- 10.0.0.1 ping statistics --- 00:25:53.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.305 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.305 18:15:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.305 ************************************ 00:25:53.305 START TEST nvmf_target_disconnect_tc1 00:25:53.305 ************************************ 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.305 [2024-12-09 18:15:16.148333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.305 [2024-12-09 18:15:16.148392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1166f40 with 
addr=10.0.0.2, port=4420 00:25:53.305 [2024-12-09 18:15:16.148431] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:53.305 [2024-12-09 18:15:16.148450] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:53.305 [2024-12-09 18:15:16.148463] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:53.305 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:53.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:53.305 Initializing NVMe Controllers 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:53.305 00:25:53.305 real 0m0.095s 00:25:53.305 user 0m0.043s 00:25:53.305 sys 0m0.052s 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:53.305 ************************************ 00:25:53.305 END TEST nvmf_target_disconnect_tc1 00:25:53.305 ************************************ 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.305 18:15:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.305 ************************************ 00:25:53.305 START TEST nvmf_target_disconnect_tc2 00:25:53.305 ************************************ 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1582784 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1582784 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1582784 ']' 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.305 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.305 [2024-12-09 18:15:16.254323] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:25:53.305 [2024-12-09 18:15:16.254414] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.305 [2024-12-09 18:15:16.327923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.564 [2024-12-09 18:15:16.386641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.564 [2024-12-09 18:15:16.386695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.564 [2024-12-09 18:15:16.386723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.564 [2024-12-09 18:15:16.386735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.564 [2024-12-09 18:15:16.386748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:53.564 [2024-12-09 18:15:16.388258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:53.564 [2024-12-09 18:15:16.388320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:53.564 [2024-12-09 18:15:16.388388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:53.564 [2024-12-09 18:15:16.388391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.564 Malloc0 00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.564 18:15:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.564 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:53.564 [2024-12-09 18:15:16.569314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:53.565 [2024-12-09 18:15:16.597638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.565 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:53.825 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.825 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1582818
00:25:53.825 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:25:53.825 18:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:55.744 18:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1582784
00:25:55.744 18:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 [2024-12-09 18:15:18.623497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Read completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.744 Write completed with error (sct=0, sc=8)
00:25:55.744 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 [2024-12-09 18:15:18.623870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.745 [2024-12-09 18:15:18.624056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.624960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.624986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.625086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.625111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.625193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.625218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.625332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.625357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.625456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.625482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 [2024-12-09 18:15:18.625806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Read completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 Write completed with error (sct=0, sc=8)
00:25:55.745 starting I/O failed
00:25:55.745 [2024-12-09 18:15:18.626101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:55.745 [2024-12-09 18:15:18.626331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.745 [2024-12-09 18:15:18.626372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.745 qpair failed and we were unable to recover it.
00:25:55.745 [2024-12-09 18:15:18.626502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.626528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.626642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.626673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.626762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.626788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.626933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.626959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.627970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.627997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.628944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.628969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.629941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.629967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.630896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.630989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.631015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.631164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.631194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.631318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.631344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.631435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.631462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.631555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.631581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.631689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.746 [2024-12-09 18:15:18.631715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.746 qpair failed and we were unable to recover it.
00:25:55.746 [2024-12-09 18:15:18.631798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.631823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.631931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.631956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.632903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.632999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.633150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.633289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.633453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.633563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.633731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.633896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.633924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.634922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.634954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.635836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.635861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.636712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.636740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.636856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.636882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.636969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.636995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.747 [2024-12-09 18:15:18.637795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.747 [2024-12-09 18:15:18.637821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.747 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.637907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.637932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.638068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.638236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.638393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.638606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.638728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.638871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.638984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.639912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.639937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.640903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.640928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.641919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.641945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.642864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.748 [2024-12-09 18:15:18.642982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.748 [2024-12-09 18:15:18.643008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.748 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.643946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.643972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.644867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.644892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.645075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.645312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.645449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.645622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.645763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.645905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.645998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.646024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.646111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.646137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.646242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.646268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.646349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.646375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.646492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.646517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.646613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.749 [2024-12-09 18:15:18.646639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.749 qpair failed and we were unable to recover it.
00:25:55.749 [2024-12-09 18:15:18.646735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.646761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.646880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.646906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.647021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.647047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.647129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.647155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.647272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.647298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 
00:25:55.749 [2024-12-09 18:15:18.647415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.647442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.647543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.647588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.647717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.749 [2024-12-09 18:15:18.647756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.749 qpair failed and we were unable to recover it. 00:25:55.749 [2024-12-09 18:15:18.647888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.647915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.648082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.648248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.648418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.648532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.648647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.648820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.648953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.648978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.649092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.649117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.649204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.649234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.649389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.649427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.649533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.649579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.649725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.649752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.649882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.649908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.650103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.650274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.650416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.650522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.650665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.650807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.650972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.650997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.651139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.651165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.651289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.651317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.651444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.651483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.651601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.651629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.651745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.651771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.651852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.651877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.651983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.652122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.652292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.652436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.652589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.652742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.652858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.652885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.653029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.653054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.653146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.653172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.653254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.653279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 
00:25:55.750 [2024-12-09 18:15:18.653393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.653419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.653513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.653559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.750 qpair failed and we were unable to recover it. 00:25:55.750 [2024-12-09 18:15:18.653711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.750 [2024-12-09 18:15:18.653738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.653827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.653853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.653960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.653985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.654091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.654198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.654340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.654446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.654585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.654726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.654833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.654941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.654966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.655049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.655187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.655300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.655439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.655604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.655748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.655859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.655884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.655979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.656597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.656930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.656955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.657072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.657180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.657291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.657448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.657603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.657746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.657909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.657940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.658054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.658079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.658194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.658220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.658332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.658359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.658504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.658533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 
00:25:55.751 [2024-12-09 18:15:18.658624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.658652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.751 [2024-12-09 18:15:18.658771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.751 [2024-12-09 18:15:18.658798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.751 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.658937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.658962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.659146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.659197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.659372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.659435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.659567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.659595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.659701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.659739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.659896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.659961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.660103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.660242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.660363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.660515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.660687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.660832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.660946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.660972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.661130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.661182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.661261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.661287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.661478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.661504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.661619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.661658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.661779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.661806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.661903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.661929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.662572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.662937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.662963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.663116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.663155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.663249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.663276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.663358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.663383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.663581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.663608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.663722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.663748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.663860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.663887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.663973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.664006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.664149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.664174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.664370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.664396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.664512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.664539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.664661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.664686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 
00:25:55.752 [2024-12-09 18:15:18.664802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.752 [2024-12-09 18:15:18.664828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.752 qpair failed and we were unable to recover it. 00:25:55.752 [2024-12-09 18:15:18.664939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.664965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.665104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.665218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.665322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.665439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.665577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.665747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.665916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.665942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.666090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.666239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.666352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.666495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.666613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.666768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.666908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.666935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.667551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.667850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.667973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.668121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.668259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.668397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.668536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.668656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.668802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.668967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.668993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.669136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.669164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.669275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.669301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.669387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.669413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 00:25:55.753 [2024-12-09 18:15:18.669532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.753 [2024-12-09 18:15:18.669564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.753 qpair failed and we were unable to recover it. 
00:25:55.753 [2024-12-09 18:15:18.669687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.669721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.669835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.669861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.669977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.670004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.670118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.670144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.670253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.670278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.670469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.670495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.670648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.670687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.670811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.670847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.670981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.671127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.671317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.671464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.671623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.671737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.671889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.671915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.672105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.672130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.672267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.672292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.672410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.672439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.672597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.672637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.672733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.672760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.672877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.672903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.673020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.673156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.673274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.673421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.673614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.673740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.673941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.673995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.674084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.674275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.674440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.674569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.674709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.674832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.674969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.674995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 
00:25:55.754 [2024-12-09 18:15:18.675120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.675146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.675237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.675263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.675371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.675396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.675490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.675529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.754 qpair failed and we were unable to recover it. 00:25:55.754 [2024-12-09 18:15:18.675677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.754 [2024-12-09 18:15:18.675716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.675803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.675835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.675955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.675981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.676119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.676145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.676258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.676283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.676359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.676383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.676506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.676555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.676675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.676702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.676813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.676839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.676987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.677156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.677316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.677468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.677583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.677696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.677816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.677841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.678023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.678253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.678390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.678493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.678613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.678739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.678859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.678888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.678973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.679174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.679311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.679445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.679582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.679688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.679800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.679941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.679967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.680088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.680194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.680292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.680399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.680527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 
00:25:55.755 [2024-12-09 18:15:18.680644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.680749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.680886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.680911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.681029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.755 [2024-12-09 18:15:18.681057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.755 qpair failed and we were unable to recover it. 00:25:55.755 [2024-12-09 18:15:18.681175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.681202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.681315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.681340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.681462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.681488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.681630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.681657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.681857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.681884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.681996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.682107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.682246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.682354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.682460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.682570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.682712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.682851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.682963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.682991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.683071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.683239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.683361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.683498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.683615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.683757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.683951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.683976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.684139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.684192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.684300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.684326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.684444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.684468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.684571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.684627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.684779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.684806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.684922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.684949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.685099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.685152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.685302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.685350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.685487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.685527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.685640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.685666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.685759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.685785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.685899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.685923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.686060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.686114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.686291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.686316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.686457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.686482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 
00:25:55.756 [2024-12-09 18:15:18.686563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.686589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.686688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.686716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.686845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.686871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.756 qpair failed and we were unable to recover it. 00:25:55.756 [2024-12-09 18:15:18.687029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.756 [2024-12-09 18:15:18.687081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 00:25:55.757 [2024-12-09 18:15:18.687256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.757 [2024-12-09 18:15:18.687282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 
00:25:55.757 [2024-12-09 18:15:18.687390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.757 [2024-12-09 18:15:18.687428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 00:25:55.757 [2024-12-09 18:15:18.687557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.757 [2024-12-09 18:15:18.687590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 00:25:55.757 [2024-12-09 18:15:18.687739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.757 [2024-12-09 18:15:18.687765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 00:25:55.757 [2024-12-09 18:15:18.687855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.757 [2024-12-09 18:15:18.687882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 00:25:55.757 [2024-12-09 18:15:18.688024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.757 [2024-12-09 18:15:18.688049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.757 qpair failed and we were unable to recover it. 
00:25:55.757 [2024-12-09 18:15:18.688132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.688912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.688984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.689137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.689297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.689466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.689636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.689761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.689919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.689945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.690911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.690936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.757 [2024-12-09 18:15:18.691791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.757 qpair failed and we were unable to recover it.
00:25:55.757 [2024-12-09 18:15:18.691909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.691935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.692905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.692991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.693935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.693962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.694910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.694988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.695951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.695976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.696889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.696914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.758 [2024-12-09 18:15:18.697008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.758 [2024-12-09 18:15:18.697034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.758 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.697954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.698959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.698985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.699122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.699265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.699400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.699523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.699697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.699859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.699975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.700001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.700090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.700123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.700240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.700265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.700357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.700383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.700467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.700492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.700642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.759 [2024-12-09 18:15:18.700668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:55.759 qpair failed and we were unable to recover it.
00:25:55.759 [2024-12-09 18:15:18.700781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.700806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.700886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.700911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.701028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.701054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.701167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.701192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.701321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.701360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 
00:25:55.759 [2024-12-09 18:15:18.701478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.701506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.701615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.701641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.701783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.701809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.701988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.702050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.702256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.702301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 
00:25:55.759 [2024-12-09 18:15:18.702383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.702408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.702533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.759 [2024-12-09 18:15:18.702580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.759 qpair failed and we were unable to recover it. 00:25:55.759 [2024-12-09 18:15:18.702796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.702835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.703024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.703156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.703303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.703439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.703572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.703711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.703827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.703855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.703981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.704150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.704292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.704435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.704561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.704777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.704913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.704939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.705078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.705103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.705218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.705243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.705330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.705357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.705562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.705601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.705728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.705766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.705916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.705944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.706057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.706198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.706364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.706498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.706669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.706812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.706935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.706994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.707093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.707326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.707444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.707585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.707736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.707850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.707965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.707990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.708074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.708099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.708235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.708260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.708455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.708481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 
00:25:55.760 [2024-12-09 18:15:18.708620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.760 [2024-12-09 18:15:18.708647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.760 qpair failed and we were unable to recover it. 00:25:55.760 [2024-12-09 18:15:18.708729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.708755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.708838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.708865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.708947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.708973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.709113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.709138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.709230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.709256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.709386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.709425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.709552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.709580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.709655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.709681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.709792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.709817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.709954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.710560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.710935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.710960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.711040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.711196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.711350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.711489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.711610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.711747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.711856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.711970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.711995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.712111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.712223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.712318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.712458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.712609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.712736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.712878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.712904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 00:25:55.761 [2024-12-09 18:15:18.713053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.761 [2024-12-09 18:15:18.713079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.761 qpair failed and we were unable to recover it. 
00:25:55.761 [2024-12-09 18:15:18.713186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.761 [2024-12-09 18:15:18.713211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.761 qpair failed and we were unable to recover it.
00:25:55.761 [2024-12-09 18:15:18.713293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.761 [2024-12-09 18:15:18.713318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.761 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 18:15:18.713405 through 18:15:18.729269 for tqpairs 0x20aefa0, 0x7efef8000b90, 0x7efefc000b90, and 0x7eff04000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:55.764 [2024-12-09 18:15:18.729355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.764 [2024-12-09 18:15:18.729380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:55.764 qpair failed and we were unable to recover it.
00:25:55.764 [2024-12-09 18:15:18.729571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.764 [2024-12-09 18:15:18.729597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.764 qpair failed and we were unable to recover it. 00:25:55.764 [2024-12-09 18:15:18.729680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.764 [2024-12-09 18:15:18.729706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.764 qpair failed and we were unable to recover it. 00:25:55.764 [2024-12-09 18:15:18.729816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.764 [2024-12-09 18:15:18.729842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.764 qpair failed and we were unable to recover it. 00:25:55.764 [2024-12-09 18:15:18.729953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.764 [2024-12-09 18:15:18.729979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.764 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.730090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.730225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.730369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.730491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.730636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.730761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.730903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.730930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.731011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.731176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.731315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.731427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.731540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.731699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.731881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.731922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.732078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.732133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.732280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.732323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.732406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.732433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.732577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.732604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.732719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.732744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.732882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.732913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.733027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.733199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.733336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.733489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.733659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.733813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.733961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.733987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.734127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.734153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.734364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.734422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.734536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.734570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.734664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.734690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.734778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.734804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.734957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.735104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.735247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.735377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 
00:25:55.765 [2024-12-09 18:15:18.735558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.735722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.765 [2024-12-09 18:15:18.735895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-12-09 18:15:18.735921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.765 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.736026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.736218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.736361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.736502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.736626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.736762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.736895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.736920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.737008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.737136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.737308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.737431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.737672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.737812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.737954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.737978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.738103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.738220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.738327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.738451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.738591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.738712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.738919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.738973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.739129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.739266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.739430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.739532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.739657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.739802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.739909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.739935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.740012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.740037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.740151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.740177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.740328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.740369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.740513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.740540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 
00:25:55.766 [2024-12-09 18:15:18.740643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.740669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.740789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-12-09 18:15:18.740815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.766 qpair failed and we were unable to recover it. 00:25:55.766 [2024-12-09 18:15:18.740927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.767 [2024-12-09 18:15:18.740952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.767 qpair failed and we were unable to recover it. 00:25:55.767 [2024-12-09 18:15:18.741060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.767 [2024-12-09 18:15:18.741085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.767 qpair failed and we were unable to recover it. 00:25:55.767 [2024-12-09 18:15:18.741202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.767 [2024-12-09 18:15:18.741227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.767 qpair failed and we were unable to recover it. 
00:25:55.769 [2024-12-09 18:15:18.757061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.769 [2024-12-09 18:15:18.757088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.769 qpair failed and we were unable to recover it. 00:25:55.769 [2024-12-09 18:15:18.757193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.769 [2024-12-09 18:15:18.757218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.769 qpair failed and we were unable to recover it. 00:25:55.769 [2024-12-09 18:15:18.757312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.769 [2024-12-09 18:15:18.757351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.769 qpair failed and we were unable to recover it. 00:25:55.769 [2024-12-09 18:15:18.757453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.769 [2024-12-09 18:15:18.757492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.769 qpair failed and we were unable to recover it. 00:25:55.769 [2024-12-09 18:15:18.757607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.769 [2024-12-09 18:15:18.757636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.769 qpair failed and we were unable to recover it. 
00:25:55.769 [2024-12-09 18:15:18.757727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.757753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.757943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.757968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.758112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.758162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.758252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.758277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.758391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.758417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.758527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.758557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.758751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.758777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.758865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.758892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.759009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.759121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.759227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.759400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.759539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.759764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.759902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.759929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.760041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.760216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.760336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.760474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.760633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.760795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.760932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.760958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.761068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.761234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.761359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.761524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.761670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.761789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.761956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.761982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.762165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.762301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.762423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.762543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.762659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.762791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 
00:25:55.770 [2024-12-09 18:15:18.762903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.762928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.763047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.763072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.763163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.763191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.763321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.763360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.770 qpair failed and we were unable to recover it. 00:25:55.770 [2024-12-09 18:15:18.763508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.770 [2024-12-09 18:15:18.763536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.763660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.763686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.763771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.763796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.763913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.763939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.764058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.764199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.764339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.764465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.764604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.764717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.764824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.764967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.764994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.765138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.765280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.765399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.765540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.765678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.765803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.765920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.765945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.766028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.766056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.766138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.766163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.766274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.766299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.766580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.766607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.766692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.766717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.766830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.766856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.767012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.767129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.767280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.767434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.767568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.767707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 
00:25:55.771 [2024-12-09 18:15:18.767849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.767874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.767984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.768009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.768121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.768146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.768235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.771 [2024-12-09 18:15:18.768261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:55.771 qpair failed and we were unable to recover it. 00:25:55.771 [2024-12-09 18:15:18.768338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.772 [2024-12-09 18:15:18.768364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:55.772 qpair failed and we were unable to recover it. 
00:25:55.772 [2024-12-09 18:15:18.768450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.058 [2024-12-09 18:15:18.768489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.058 qpair failed and we were unable to recover it. 00:25:56.058 [2024-12-09 18:15:18.768595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.058 [2024-12-09 18:15:18.768624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.058 qpair failed and we were unable to recover it. 00:25:56.058 [2024-12-09 18:15:18.768705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.058 [2024-12-09 18:15:18.768731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.058 qpair failed and we were unable to recover it. 00:25:56.058 [2024-12-09 18:15:18.768842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.058 [2024-12-09 18:15:18.768868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.058 qpair failed and we were unable to recover it. 00:25:56.058 [2024-12-09 18:15:18.768983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.058 [2024-12-09 18:15:18.769010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.058 qpair failed and we were unable to recover it. 
00:25:56.058 [2024-12-09 18:15:18.769098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.058 [2024-12-09 18:15:18.769125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.058 qpair failed and we were unable to recover it.
00:25:56.058 [2024-12-09 18:15:18.769241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.058 [2024-12-09 18:15:18.769268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.058 qpair failed and we were unable to recover it.
00:25:56.058 [2024-12-09 18:15:18.769377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.058 [2024-12-09 18:15:18.769415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.058 qpair failed and we were unable to recover it.
00:25:56.058 [2024-12-09 18:15:18.769517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.058 [2024-12-09 18:15:18.769552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.058 qpair failed and we were unable to recover it.
00:25:56.058 [2024-12-09 18:15:18.769643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.769669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.769755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.769781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.769890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.769916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.770916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.770942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.771890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.771915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.772964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.772990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.773107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.773249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.773384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.773505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.773633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.773754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.773952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.774869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.774976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.059 [2024-12-09 18:15:18.775000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.059 qpair failed and we were unable to recover it.
00:25:56.059 [2024-12-09 18:15:18.775085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.775964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.775989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.776926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.776954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.777834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.777860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.778905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.778930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.779908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.779944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.780089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.780124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.780276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.060 [2024-12-09 18:15:18.780303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.060 qpair failed and we were unable to recover it.
00:25:56.060 [2024-12-09 18:15:18.780416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.780445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.780569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.780607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.780729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.780756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.780855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.780881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.780966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.780992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.781103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.781138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.781235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.781262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.781406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.781435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.781555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.781583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.781780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.781807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.781919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.781944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.782060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.782087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.782172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.061 [2024-12-09 18:15:18.782197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.061 qpair failed and we were unable to recover it.
00:25:56.061 [2024-12-09 18:15:18.782302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.782342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.782449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.782498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.782601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.782629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.782746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.782772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.782853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.782880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 
00:25:56.061 [2024-12-09 18:15:18.783022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.783188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.783305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.783450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.783571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 
00:25:56.061 [2024-12-09 18:15:18.783689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.783806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.783916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.783942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.784058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.784194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 
00:25:56.061 [2024-12-09 18:15:18.784314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.784454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.784562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.784694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.784800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 
00:25:56.061 [2024-12-09 18:15:18.784950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.784976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.785060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.785086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.785175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.785202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.785310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.785422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.785449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 
00:25:56.061 [2024-12-09 18:15:18.785571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.061 [2024-12-09 18:15:18.785597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.061 qpair failed and we were unable to recover it. 00:25:56.061 [2024-12-09 18:15:18.785709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.785735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.785820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.785845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.785955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.785982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.786066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.786182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.786291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.786399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.786627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.786796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.786916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.786941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.787580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.787901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.787987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.788122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.788228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.788364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.788539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.788674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.788813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.788919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.788946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.789586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.789939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.789965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.790055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.790081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.062 [2024-12-09 18:15:18.790192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.790218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.790331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.790358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.790473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.790499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.790593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.790619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 00:25:56.062 [2024-12-09 18:15:18.790700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.062 [2024-12-09 18:15:18.790725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.062 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.790835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.790861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.790972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.790997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.791484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.791892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.791978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.792094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.792208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.792343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.792516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.792672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.792815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.792942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.792972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.793084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.793247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.793367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.793502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.793622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.793784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.793928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.793954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.794062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.794171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.794311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.794422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.794566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.794691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 
00:25:56.063 [2024-12-09 18:15:18.794810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.794929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.794955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.063 [2024-12-09 18:15:18.795046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-12-09 18:15:18.795071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.063 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.795185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.795322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.795438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.795551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.795660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.795774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.795880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.795906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.796020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.796139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.796358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.796478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.796622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.796750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.796874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.796900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.797023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.797134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.797280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.797419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.797634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.797746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.797851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.797878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.798003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.798122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.798238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.798377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.798493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.798649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.798795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.798960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.798988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.799065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.799176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.799346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.799566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.799706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.799815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.799955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.799980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.800090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.800115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 
00:25:56.064 [2024-12-09 18:15:18.800310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-12-09 18:15:18.800335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.064 qpair failed and we were unable to recover it. 00:25:56.064 [2024-12-09 18:15:18.800430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.800469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.800596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.800625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.800718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.800745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.800890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.800918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.801036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.801142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.801257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.801367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.801475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.801618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.801790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.801900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.801927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.802034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.802218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.802387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.802532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.802650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.802791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.802920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.802945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.803029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.803139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.803246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.803398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.803519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.803648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.803760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.803905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.803931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.804009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.804119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.804253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.804394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.804507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.804636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.804761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.804870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.804896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.805009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.805035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.805115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.805141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.805227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.805254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 00:25:56.065 [2024-12-09 18:15:18.805341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.805366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.065 qpair failed and we were unable to recover it. 
00:25:56.065 [2024-12-09 18:15:18.805486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-12-09 18:15:18.805511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.066 qpair failed and we were unable to recover it. 00:25:56.066 [2024-12-09 18:15:18.805605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-12-09 18:15:18.805632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.066 qpair failed and we were unable to recover it. 00:25:56.066 [2024-12-09 18:15:18.805715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-12-09 18:15:18.805740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.066 qpair failed and we were unable to recover it. 00:25:56.066 [2024-12-09 18:15:18.805826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-12-09 18:15:18.805852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.066 qpair failed and we were unable to recover it. 00:25:56.066 [2024-12-09 18:15:18.805972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-12-09 18:15:18.806000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.066 qpair failed and we were unable to recover it. 
00:25:56.066 [2024-12-09 18:15:18.806096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.806926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.806951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.807966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.807992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.808816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.808847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.809931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.809957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.810037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.810064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.810148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.810175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.810295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.810321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.066 [2024-12-09 18:15:18.810405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.066 [2024-12-09 18:15:18.810430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.066 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.810552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.810579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.810666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.810692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.810772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.810797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.810876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.810902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.810983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.811931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.811956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.812945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.812971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.813963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.813989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.067 qpair failed and we were unable to recover it.
00:25:56.067 [2024-12-09 18:15:18.814927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-12-09 18:15:18.814955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.815956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.815980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.816931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.816956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.817944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.817970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.818910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.818940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.819024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.819050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.819161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.819187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.819284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.819309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.819387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.819413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.819487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.819513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.068 [2024-12-09 18:15:18.819607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-12-09 18:15:18.819635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.068 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.819720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-12-09 18:15:18.819746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.069 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.819822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-12-09 18:15:18.819847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.069 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.819926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-12-09 18:15:18.819950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.069 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.820066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-12-09 18:15:18.820091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.069 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.820180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-12-09 18:15:18.820204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.069 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.820291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-12-09 18:15:18.820318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.069 qpair failed and we were unable to recover it.
00:25:56.069 [2024-12-09 18:15:18.820389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.820415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.820502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.820528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.820635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.820662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.820743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.820769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.820872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.820898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 
00:25:56.069 [2024-12-09 18:15:18.821015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.821155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.821268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.821393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.821529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 
00:25:56.069 [2024-12-09 18:15:18.821660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.821840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.821886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.821984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.822130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.822245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 
00:25:56.069 [2024-12-09 18:15:18.822363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.822499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.822643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.822784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.822888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.822914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 
00:25:56.069 [2024-12-09 18:15:18.822993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 
00:25:56.069 [2024-12-09 18:15:18.823602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.823900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.823988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.824014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.824105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.824132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 
00:25:56.069 [2024-12-09 18:15:18.824208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.824234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.824355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.824380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.824494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.069 [2024-12-09 18:15:18.824519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.069 qpair failed and we were unable to recover it. 00:25:56.069 [2024-12-09 18:15:18.824614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.824639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.824746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.824771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.824863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.824887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.825451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.825958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.825985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.826101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.826245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.826351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.826482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.826599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.826711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.826823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.826960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.826986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.827068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.827184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.827295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.827399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.827501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.827627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.827736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.827874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.827901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.828020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.828153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.828346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.828510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.828653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.828795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.828901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.828926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.829019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.829047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.829186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.829213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 
00:25:56.070 [2024-12-09 18:15:18.829345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.829384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.829477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.829503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.070 qpair failed and we were unable to recover it. 00:25:56.070 [2024-12-09 18:15:18.829592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-12-09 18:15:18.829618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.829697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.829722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.829835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.829860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.829947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.829972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.830064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.830211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.830328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.830442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.830581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.830721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.830872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.830897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.831258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.831839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.831945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.831970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.832485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.832952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.832977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.833064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.833173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.833287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.833428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.833573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.833689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.833853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.833966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.833993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.834082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.834108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.834249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.834275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 
00:25:56.071 [2024-12-09 18:15:18.834389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.834415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.834493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-12-09 18:15:18.834520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.071 qpair failed and we were unable to recover it. 00:25:56.071 [2024-12-09 18:15:18.834646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.834673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.834789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.834815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.834898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.834923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.835036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.835187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.835323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.835465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.835605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.835753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.835890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.835937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.836494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.836923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.836999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.837104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.837216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.837324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.837431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.837539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.837657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.837763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.837869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.837894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.838006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.838113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.838225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.838362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.838477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.838616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.838756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 
00:25:56.072 [2024-12-09 18:15:18.838919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-12-09 18:15:18.838945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.072 qpair failed and we were unable to recover it. 00:25:56.072 [2024-12-09 18:15:18.839059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.839201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.839309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.839483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 
00:25:56.073 [2024-12-09 18:15:18.839634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.839747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.839910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.839950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.840071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.840192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 
00:25:56.073 [2024-12-09 18:15:18.840309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.840447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.840554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.840666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.840810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.840836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 
00:25:56.073 [2024-12-09 18:15:18.841009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.841161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.841329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.841446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.841558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 
00:25:56.073 [2024-12-09 18:15:18.841673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.841787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.841893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.841919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.842060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.842086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 00:25:56.073 [2024-12-09 18:15:18.842199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.073 [2024-12-09 18:15:18.842225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.073 qpair failed and we were unable to recover it. 
00:25:56.073 [2024-12-09 18:15:18.842341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-12-09 18:15:18.842368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.073 qpair failed and we were unable to recover it.
00:25:56.073 [... the three-line sequence above repeats continuously from 18:15:18.842483 through 18:15:18.857315 (log timestamps 00:25:56.073-00:25:56.076): every connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), the corresponding tqpair (alternating among 0x20aefa0, 0x7efef8000b90, and 0x7efefc000b90) reports a sock connection error, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:56.076 [2024-12-09 18:15:18.857405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.857431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.857513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.857538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.857634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.857661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.857759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.857785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.857895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.857922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 
00:25:56.076 [2024-12-09 18:15:18.858039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.858171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.858281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.858409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.858507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 
00:25:56.076 [2024-12-09 18:15:18.858637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.858783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.858901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.858927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.076 qpair failed and we were unable to recover it. 00:25:56.076 [2024-12-09 18:15:18.859012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-12-09 18:15:18.859038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.859152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.859320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.859463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.859608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.859729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.859843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.859951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.859977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.860067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.860202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.860334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.860510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.860654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.860766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.860915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.860942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.861046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.861205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.861371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.861480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.861615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.861733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.861858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.861969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.861995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.862108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.862224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.862350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.862488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.862642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.862754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.862918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.862944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.863031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.863147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.863287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.863393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.863500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.863616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.863771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 
00:25:56.077 [2024-12-09 18:15:18.863879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.863905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.864048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.864073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.077 qpair failed and we were unable to recover it. 00:25:56.077 [2024-12-09 18:15:18.864190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.077 [2024-12-09 18:15:18.864216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.864300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.864325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.864435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.864463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 
00:25:56.078 [2024-12-09 18:15:18.864578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.864604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.864694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.864720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.864800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.864829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.864967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.864992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.865080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 
00:25:56.078 [2024-12-09 18:15:18.865178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.865284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.865398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.865541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.865671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 
00:25:56.078 [2024-12-09 18:15:18.865815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.865842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.865990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.866141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.866283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.866467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 
00:25:56.078 [2024-12-09 18:15:18.866647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.866764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.866871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.866897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.867037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.867148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 
00:25:56.078 [2024-12-09 18:15:18.867257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.867360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.867461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.867619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 00:25:56.078 [2024-12-09 18:15:18.867729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.078 [2024-12-09 18:15:18.867756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.078 qpair failed and we were unable to recover it. 
00:25:56.078 [2024-12-09 18:15:18.867830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-12-09 18:15:18.867856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.078 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats with advancing timestamps from 18:15:18.867830 through 18:15:18.882318 (elapsed 00:25:56.078–00:25:56.081), alternating across tqpair=0x7efef8000b90, tqpair=0x7efefc000b90, and tqpair=0x20aefa0, all targeting addr=10.0.0.2, port=4420 ...]
00:25:56.081 [2024-12-09 18:15:18.882432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.882457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.882570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.882596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.882687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.882713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.882796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.882822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.882935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.882960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 
00:25:56.081 [2024-12-09 18:15:18.883075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.883106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.883300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.883332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.883435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.883460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.883574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.883605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.883697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.883722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 
00:25:56.081 [2024-12-09 18:15:18.883811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.883836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.883988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.884013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.884131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.884162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.884296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.884326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.081 [2024-12-09 18:15:18.884502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.884528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 
00:25:56.081 [2024-12-09 18:15:18.884668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-12-09 18:15:18.884707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.081 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.884803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.884831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.884960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.885171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.885299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.885430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.885557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.885674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.885783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.885885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.885932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.886044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.886213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.886341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.886453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.886565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.886674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.886786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.886937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.886968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.887059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.887217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.887350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.887507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.887628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.887736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.887924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.887956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.888148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.888180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.888344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.888379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.888506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.888532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.888663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.888688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.888776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.888804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.888975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.889020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.889212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.889246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.889389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.889425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.889583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.889609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.889713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.889752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.889868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.889896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.890031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.890078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.890202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.890229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.890325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.890351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 00:25:56.082 [2024-12-09 18:15:18.890434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.890460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.082 qpair failed and we were unable to recover it. 
00:25:56.082 [2024-12-09 18:15:18.890556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.082 [2024-12-09 18:15:18.890583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.890672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.890697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.890789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.890814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.890953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.890978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.891069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.891205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.891313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.891429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.891570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.891685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.891803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.891964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.891995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.892162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.892210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.892374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.892419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.892529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.892561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.892664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.892690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.892781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.892806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.892932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.892979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.893095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.893121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.893216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.893242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.893396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.893436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.893533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.893565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.893676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.893701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.893842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.893866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.893980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.894005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.894119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.894144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.894240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.894272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.894474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.894513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.894649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.894679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.894766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.894794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.895010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.895205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.895368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.895490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.895639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.895757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.895866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.895894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.895985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.896012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.896127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.896154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 00:25:56.083 [2024-12-09 18:15:18.896248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-12-09 18:15:18.896274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.083 qpair failed and we were unable to recover it. 
00:25:56.083 [2024-12-09 18:15:18.896354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.896381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.896520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.896552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.896636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.896662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.896781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.896807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.896922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.896948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.897060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.897171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.897318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.897462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.897606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.897743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.897934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.897960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.898096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.898232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.898350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.898485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.898609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.898739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.898872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.898919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.899057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.899230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.899354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.899473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.899626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.899791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.899948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.899983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.900148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.900299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.900417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.900554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.900663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.900773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.900885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.900911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.901028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.901054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.901144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.901176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 
00:25:56.084 [2024-12-09 18:15:18.901257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.901285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.901374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.901400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.901483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.901509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.901631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-12-09 18:15:18.901658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.084 qpair failed and we were unable to recover it. 00:25:56.084 [2024-12-09 18:15:18.901788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.901814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.901900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.901927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.902505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.902867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.902894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.903041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.903159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.903277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.903406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.903529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.903656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.903820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.903869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.904509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.904953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.904986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.905098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.905280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.905421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.905523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.905644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.905810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 
00:25:56.085 [2024-12-09 18:15:18.905937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.905963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.906078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.906103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.906242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.906268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.906348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-12-09 18:15:18.906375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.085 qpair failed and we were unable to recover it. 00:25:56.085 [2024-12-09 18:15:18.906468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.906496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 
00:25:56.086 [2024-12-09 18:15:18.906605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.906632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.906742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.906776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.906926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.906974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.907062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.907198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 
00:25:56.086 [2024-12-09 18:15:18.907312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.907446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.907585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.907731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 00:25:56.086 [2024-12-09 18:15:18.907837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-12-09 18:15:18.907863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.086 qpair failed and we were unable to recover it. 
00:25:56.086 [2024-12-09 18:15:18.907966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.907999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.908926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.908959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.909058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.909091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.909270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.909323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.909462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.909488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.909579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.909606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.909722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.909748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.909861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.909908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.910955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.910981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.911924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.086 [2024-12-09 18:15:18.911958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.086 qpair failed and we were unable to recover it.
00:25:56.086 [2024-12-09 18:15:18.912097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.912214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.912363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.912481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.912599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.912740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.912957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.912983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.913938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.913963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.914922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.914948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.915880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.915906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.916927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.916955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.917054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.917095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.087 [2024-12-09 18:15:18.917238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.087 [2024-12-09 18:15:18.917266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.087 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.917355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.917381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.917470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.917495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.917598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.917625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.917715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.917741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.917825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.917851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.917966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.917992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.918961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.918986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.919892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.919978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.920003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.920089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.920116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.920202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.920229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.920341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.920368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.920479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.920506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.920593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-12-09 18:15:18.920621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.088 qpair failed and we were unable to recover it.
00:25:56.088 [2024-12-09 18:15:18.920741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.920768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.920863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.920894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.920980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.921089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.921222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 
00:25:56.088 [2024-12-09 18:15:18.921331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.921433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.921564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.921675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.921807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 
00:25:56.088 [2024-12-09 18:15:18.921958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.921984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.088 [2024-12-09 18:15:18.922058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-12-09 18:15:18.922084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.088 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.922171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.922319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.922462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.922581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.922722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.922863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.922967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.922993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.923107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.923251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.923366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.923476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.923630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.923741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.923851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.923879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.923999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.924025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.924110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.924137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.924283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.924315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.924517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.924550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.924744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.924772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.924864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.924893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.924981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.925135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.925250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.925365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.925480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.925601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.925712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.925827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.925964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.925989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.926111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.926138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.926230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.926257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.926406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.926432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.926524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.926556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 
00:25:56.089 [2024-12-09 18:15:18.926642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.089 [2024-12-09 18:15:18.926669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.089 qpair failed and we were unable to recover it. 00:25:56.089 [2024-12-09 18:15:18.926754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.926780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.926875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.926901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.926983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.927090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.927203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.927344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.927504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.927640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.927764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.927912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.927938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.928055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.928188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.928318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.928476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.928621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.928757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.928940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.928973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.929084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.929119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.929286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.929318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.929423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.929470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.929587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.929615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.929770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.929798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.929948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.929978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.930089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.930114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.930334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.930367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.930474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.930499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.930623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.930650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.930743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.930768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.930881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.930906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.931021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.931179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.931400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.931503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.931638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.931778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.931921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.931946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.932094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.932120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.932304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.932330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 00:25:56.090 [2024-12-09 18:15:18.932469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.932495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.090 qpair failed and we were unable to recover it. 
00:25:56.090 [2024-12-09 18:15:18.932590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-12-09 18:15:18.932616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.091 qpair failed and we were unable to recover it. 00:25:56.091 [2024-12-09 18:15:18.932702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-12-09 18:15:18.932728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.091 qpair failed and we were unable to recover it. 00:25:56.091 [2024-12-09 18:15:18.932812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-12-09 18:15:18.932839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.091 qpair failed and we were unable to recover it. 00:25:56.091 [2024-12-09 18:15:18.932934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-12-09 18:15:18.932959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.091 qpair failed and we were unable to recover it. 00:25:56.091 [2024-12-09 18:15:18.933103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-12-09 18:15:18.933128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.091 qpair failed and we were unable to recover it. 
00:25:56.093 [2024-12-09 18:15:18.949582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.093 [2024-12-09 18:15:18.949621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.093 qpair failed and we were unable to recover it. 00:25:56.093 [2024-12-09 18:15:18.949747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.093 [2024-12-09 18:15:18.949774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.093 qpair failed and we were unable to recover it. 00:25:56.093 [2024-12-09 18:15:18.949864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.093 [2024-12-09 18:15:18.949891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.093 qpair failed and we were unable to recover it. 00:25:56.093 [2024-12-09 18:15:18.950009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.950188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.950350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.950488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.950637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.950754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.950864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.950890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.951002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.951108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.951228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.951397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.951509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.951630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.951779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.951920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.951946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.952034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.952134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.952272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.952420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.952542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.952686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.952798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.952908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.952935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.953067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.953102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.953314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.953348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.953518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.953543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.953639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.953665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.953744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.953769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.953929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.953962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.954163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.954196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.954288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.954320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.954487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.954542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 
00:25:56.094 [2024-12-09 18:15:18.954652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.954677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.954761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.954787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.954879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.094 [2024-12-09 18:15:18.954929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.094 qpair failed and we were unable to recover it. 00:25:56.094 [2024-12-09 18:15:18.955063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.955208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.955321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.955459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.955580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.955745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.955855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.955963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.955988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.956527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.956804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.956924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bcf30 is same with the state(6) to be set 00:25:56.095 [2024-12-09 18:15:18.957075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.957238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.957415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.957555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.957698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.957844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.957965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.957998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.958135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.958167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.958334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.958367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.958472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.958497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.958623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.958649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.958732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.958775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.958980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.959106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.959274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.959464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.959623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.959779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.959966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.959993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.960084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.960110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.960282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.960330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.960445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.960471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 
00:25:56.095 [2024-12-09 18:15:18.960591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.960618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.095 [2024-12-09 18:15:18.960740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-12-09 18:15:18.960765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.095 qpair failed and we were unable to recover it. 00:25:56.096 [2024-12-09 18:15:18.960881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-12-09 18:15:18.960907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.096 qpair failed and we were unable to recover it. 00:25:56.096 [2024-12-09 18:15:18.961000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-12-09 18:15:18.961039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.096 qpair failed and we were unable to recover it. 00:25:56.096 [2024-12-09 18:15:18.961131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-12-09 18:15:18.961158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.096 qpair failed and we were unable to recover it. 
00:25:56.098 [2024-12-09 18:15:18.977668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-12-09 18:15:18.977707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.977825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.977852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.977968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.977994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.978132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.978177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.978302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.978340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.978442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.978481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.978576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.978604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.978787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.978834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.978920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.978948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.979059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.979107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.979249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.979285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.979426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.979459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.979638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.979663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.979775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.979799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.979958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.979981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.980211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.980244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.980387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.980419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.980538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.980568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.980660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.980684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.980801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.980825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.980927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.980959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.981163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.981197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.981374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.981422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.981539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.981574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.981709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.981734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.981910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.981967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.982143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.982176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.982336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.982380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.982466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.982490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.982652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.982692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.982809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.982852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.983046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.983228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.983424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.983595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.983708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.983820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.983966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.983992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.984112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.984138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 00:25:56.099 [2024-12-09 18:15:18.984235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-12-09 18:15:18.984262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.099 qpair failed and we were unable to recover it. 
00:25:56.099 [2024-12-09 18:15:18.984350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.984377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.984505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.984553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.984682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.984709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.984800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.984826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.984934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.984966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.985094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.985226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.985338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.985451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.985592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.985706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.985810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.985944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.985986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.986084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.986232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.986342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.986450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.986593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.986740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.986859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.986885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.986974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.987086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.987201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.987330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.987463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.987578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.987718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.987849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.987898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.988006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.988208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.988327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.988440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.988581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.988722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.988827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.988853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 
00:25:56.100 [2024-12-09 18:15:18.988984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.989033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.989164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.989213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.989297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.989324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.989407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-12-09 18:15:18.989433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.100 qpair failed and we were unable to recover it. 00:25:56.100 [2024-12-09 18:15:18.989525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.101 [2024-12-09 18:15:18.989558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.101 qpair failed and we were unable to recover it. 
00:25:56.102 [2024-12-09 18:15:18.995147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-12-09 18:15:18.995179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.102 qpair failed and we were unable to recover it.
00:25:56.102 [2024-12-09 18:15:18.995283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-12-09 18:15:18.995321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.102 qpair failed and we were unable to recover it.
00:25:56.104 [2024-12-09 18:15:19.004339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.004365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.004491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.004516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.004651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.004691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.004813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.004841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.004991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.005040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.005181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.005237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.005340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.005387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.005540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.005597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.005755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.005805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.005909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.005944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.006072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.006233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.006376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.006496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.006620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.006763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.006897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.006925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.007455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.007846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.007983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.008127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.008266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.008422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.008532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.008679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.008819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.008847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.008956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.009139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.009326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.009454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.009578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.009725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.009842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.009868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.010019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.010140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.010256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.010368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.010482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.010601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 00:25:56.104 [2024-12-09 18:15:19.010740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-12-09 18:15:19.010765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.104 qpair failed and we were unable to recover it. 
00:25:56.104 [2024-12-09 18:15:19.010845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.010871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.011000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.011032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.011192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.011225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.011344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.011395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.011528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.011571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.105 [2024-12-09 18:15:19.011691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.011717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.011811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.011836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.011993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.012148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.012279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.105 [2024-12-09 18:15:19.012411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.012551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.012731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.012901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.012934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.013039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.105 [2024-12-09 18:15:19.013227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.013372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.013503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.013625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.013769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.105 [2024-12-09 18:15:19.013942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.013968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.014082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.014220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.014362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.014477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.105 [2024-12-09 18:15:19.014622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.014730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.014849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.014875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.015011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.015152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.105 [2024-12-09 18:15:19.015290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.015416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.015565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.015704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 00:25:56.105 [2024-12-09 18:15:19.015871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.105 [2024-12-09 18:15:19.015918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.105 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.031008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.031126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.031272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.031419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.031529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.031667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.031806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.031926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.031952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.032060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.032165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.032335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.032453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.032571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.032719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.032849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.032884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.033008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.033198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.033361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.033469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.033610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.033773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.033937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.033963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.034108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.034134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.034249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.034274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.034351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.034377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.034494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.034522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.034649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.034677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.034803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.034842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.035024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.035059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.108 [2024-12-09 18:15:19.035200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.035247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 
00:25:56.108 [2024-12-09 18:15:19.035419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-12-09 18:15:19.035452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.108 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.035602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.035631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.035715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.035740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.035847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.035885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.036041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.036266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.036411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.036532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.036678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.036801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.036941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.036974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.037082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.037108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.037261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.037293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.037397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.037430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.037608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.037647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.037746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.037775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.037917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.037962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.038138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.038187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.038301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.038326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.038399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.038425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.038542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.038575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.038707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.038759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.038906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.038934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.039050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.039076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.039193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.039219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.039338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.039380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.039511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.039536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.039744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.039770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.039891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.039924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.040125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.040158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.040273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.040344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.040512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.040541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.040653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.040692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.040797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.040826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.040995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.041045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.041181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.041226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.041428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.041466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.041672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.041701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.041816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.041841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.041956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.041983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.042132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.042164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.042331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.042364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.042510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.042535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.042628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.042659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.042745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.042771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 
00:25:56.109 [2024-12-09 18:15:19.042909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.042942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.109 qpair failed and we were unable to recover it. 00:25:56.109 [2024-12-09 18:15:19.043077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-12-09 18:15:19.043112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.110 qpair failed and we were unable to recover it. 00:25:56.110 [2024-12-09 18:15:19.043315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-12-09 18:15:19.043348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.110 qpair failed and we were unable to recover it. 00:25:56.110 [2024-12-09 18:15:19.043490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-12-09 18:15:19.043516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.110 qpair failed and we were unable to recover it. 00:25:56.110 [2024-12-09 18:15:19.043656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-12-09 18:15:19.043695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.110 qpair failed and we were unable to recover it. 
00:25:56.112 [2024-12-09 18:15:19.060023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.060059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.060234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.060268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.060385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.060419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.060562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.060588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.060749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.060774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 
00:25:56.112 [2024-12-09 18:15:19.060873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.060916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.061098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.061133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.061274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.061309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.061439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.061473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.061568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.061613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 
00:25:56.112 [2024-12-09 18:15:19.061704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.061732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.061819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.061847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.062022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.062207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.062334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 
00:25:56.112 [2024-12-09 18:15:19.062477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.062615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.062720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.112 [2024-12-09 18:15:19.062825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-12-09 18:15:19.062850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.112 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.062980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.063014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.063216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.063250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.063415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.063440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.063555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.063584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.063710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.063736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.064045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.064096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.064231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.064278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.064395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.064420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.064516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.064541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.064666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.064692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.064840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.064886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.065020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.065054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.065192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.065218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.065352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.065377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.065585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.065625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.065748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.065776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.065864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.065889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.066581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.066953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.066990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.067129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.067172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.067373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.067408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.067557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.067583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.067698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.067724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.067804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.067830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.067972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.068160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.068336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.068501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.068668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.068784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.068914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.068950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.069133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.069180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.069354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.069403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.069521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.069554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.069657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.069696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 
00:25:56.113 [2024-12-09 18:15:19.069841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.069869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.070033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.070069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.070227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.070273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.070419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.070445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.113 qpair failed and we were unable to recover it. 00:25:56.113 [2024-12-09 18:15:19.070565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-12-09 18:15:19.070592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.114 qpair failed and we were unable to recover it. 
00:25:56.114 [2024-12-09 18:15:19.070669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-12-09 18:15:19.070694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.114 qpair failed and we were unable to recover it. 00:25:56.114 [2024-12-09 18:15:19.070832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-12-09 18:15:19.070857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.114 qpair failed and we were unable to recover it. 00:25:56.114 [2024-12-09 18:15:19.070936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-12-09 18:15:19.070961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.114 qpair failed and we were unable to recover it. 00:25:56.114 [2024-12-09 18:15:19.071042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.071067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.071188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.071223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 
00:25:56.398 [2024-12-09 18:15:19.071366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.071410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.071534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.071587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.071706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.071734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.071872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.071899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.072005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 
00:25:56.398 [2024-12-09 18:15:19.072193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.072370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.072528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.072659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.072775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 
00:25:56.398 [2024-12-09 18:15:19.072885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.072911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.073027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.073062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.073237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.073271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.073418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.073453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.073567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.073599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 
00:25:56.398 [2024-12-09 18:15:19.073699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.073725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.073839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.073864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.074040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.074075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.074247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.398 [2024-12-09 18:15:19.074282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.398 qpair failed and we were unable to recover it. 00:25:56.398 [2024-12-09 18:15:19.074427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.074462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.074687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.074713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.074806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.074834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.074923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.074950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.075124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.075254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.075394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.075505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.075631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.075805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.075952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.075978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.076085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.076217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.076330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.076434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.076555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.076695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.076808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.076857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.076970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.077185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.077334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.077444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.077591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.077727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.077885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.077910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.078088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.078114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.078348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.078384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.078520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.078564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.078664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.078690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.078800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.078826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.078937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.078962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.079111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.079147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.079253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.079299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.079407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.079443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.079586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.079613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.079756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.079786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 
00:25:56.399 [2024-12-09 18:15:19.079973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.080009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.080169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.080204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.080341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.399 [2024-12-09 18:15:19.080375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.399 qpair failed and we were unable to recover it. 00:25:56.399 [2024-12-09 18:15:19.080562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.080607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.080697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.080723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.080861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.080886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.081022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.081189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.081350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.081508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.081629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.081759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.081871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.081897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.082001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.082046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.082192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.082228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.082347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.082382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.082552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.082591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.082713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.082741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.082854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.082880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.082986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.083181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.083332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.083501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.083637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.083797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.083959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.083986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.084602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.084903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.084990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.085122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.085280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.085464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.085623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.085788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.085960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.085991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 
00:25:56.400 [2024-12-09 18:15:19.086075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.086102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.086179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.086208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.400 qpair failed and we were unable to recover it. 00:25:56.400 [2024-12-09 18:15:19.086326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.400 [2024-12-09 18:15:19.086352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.086480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.086520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.086643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.086670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 
00:25:56.401 [2024-12-09 18:15:19.086796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.086839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.086992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.087027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.087152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.087197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.087340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.087375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.087486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.087527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 
00:25:56.401 [2024-12-09 18:15:19.087655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.087680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.087810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.087845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.087991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.088025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.088157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.088192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 00:25:56.401 [2024-12-09 18:15:19.088375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.401 [2024-12-09 18:15:19.088425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.401 qpair failed and we were unable to recover it. 
00:25:56.401 [2024-12-09 18:15:19.088539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.088571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.088666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.088705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.088794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.088821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.088988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.089015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.089156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.089203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.089347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.089374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.089501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.089540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.089668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.089696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.089849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.089875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.089990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.090129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.090288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.090449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.090641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.090799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.090917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.090943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.091111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.091275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.091410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.091567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.091749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.091900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.091994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.092020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.092162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.092197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.092339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.092377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.092558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.092584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.401 [2024-12-09 18:15:19.092707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.401 qpair failed and we were unable to recover it.
00:25:56.401 [2024-12-09 18:15:19.092822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.092847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.093926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.093994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.094110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.094160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.094311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.094360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.094481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.094507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.094660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.094687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.094802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.094828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.094939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.094966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.095952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.095979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.096069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.096097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.096241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.096287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.096416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.096463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.096582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.096610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.096703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.096729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.096867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.096903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.097058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.097097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.097228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.097256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.097373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.097400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.097516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.097542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.097643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.097669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.402 qpair failed and we were unable to recover it.
00:25:56.402 [2024-12-09 18:15:19.097781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.402 [2024-12-09 18:15:19.097831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.098963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.098990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.099169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.099215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.099361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.099414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.099505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.099533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.099656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.099683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.099791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.099841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.099948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.099996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.100828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.100877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.101031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.101066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.101174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.101209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.101330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.101359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.101503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.101528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.101638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.101678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.101818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.101856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.102056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.102091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.102246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.102281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.102392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.102474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.102640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.102679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.102784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.102811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.102947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.102996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.103167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.403 [2024-12-09 18:15:19.103217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.403 qpair failed and we were unable to recover it.
00:25:56.403 [2024-12-09 18:15:19.103327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.403 [2024-12-09 18:15:19.103353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.403 qpair failed and we were unable to recover it. 00:25:56.403 [2024-12-09 18:15:19.103467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.403 [2024-12-09 18:15:19.103494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.403 qpair failed and we were unable to recover it. 00:25:56.403 [2024-12-09 18:15:19.103613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.403 [2024-12-09 18:15:19.103640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.403 qpair failed and we were unable to recover it. 00:25:56.403 [2024-12-09 18:15:19.103739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.403 [2024-12-09 18:15:19.103769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.403 qpair failed and we were unable to recover it. 00:25:56.403 [2024-12-09 18:15:19.103914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.103940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 
00:25:56.404 [2024-12-09 18:15:19.104062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.104203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.104338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.104440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.104556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 
00:25:56.404 [2024-12-09 18:15:19.104678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.104811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.104925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.104953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.105094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.105119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 00:25:56.404 [2024-12-09 18:15:19.105224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.404 [2024-12-09 18:15:19.105250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.404 qpair failed and we were unable to recover it. 
00:25:56.404 [2024-12-09 18:15:19.105359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.105384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.105481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.105520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.105653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.105680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.105771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.105797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.105937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.105963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.106144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.106338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.106479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.106637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.106793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.106910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.106999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.107883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.107909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.108903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.108940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.109047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.109090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.109209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.404 [2024-12-09 18:15:19.109246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.404 qpair failed and we were unable to recover it.
00:25:56.404 [2024-12-09 18:15:19.109394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.109431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.109570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.109613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.109702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.109727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.109809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.109836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.109979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.110027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.110140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.110189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.110301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.110327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.110465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.110504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.110604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.110631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.110758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.110784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.110971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.111136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.111327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.111476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.111600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.111739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.111906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.111942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.112091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.112127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.112256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.112283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.112471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.112507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.112665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.112691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.112813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.112859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.113068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.113105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.113290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.113326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.113522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.113600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.113676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.113701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.113784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.113808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.113947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.113979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.114185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.114221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.114381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.114417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.114520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.114571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.114730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.114755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.114873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.114898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.115009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.115034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.115269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.115328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.115447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.115475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.115566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.115594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.115710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.115737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.115869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.115918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.405 [2024-12-09 18:15:19.116063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.405 [2024-12-09 18:15:19.116107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.405 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.116211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.116250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.116410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.116446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.116595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.116621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.116732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.116756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.116886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.116922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.117041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.117065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.117270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.117322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.117458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.117497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.117671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.117702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.117842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.117879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.118034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.118081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.118265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.118316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.118406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.118433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.118571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.118610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.118750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.118789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.118941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.118981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.119132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.119169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.119313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.119365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.119502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.119541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.119694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.119733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.119832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.119858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.119983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.406 [2024-12-09 18:15:19.120028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.406 qpair failed and we were unable to recover it.
00:25:56.406 [2024-12-09 18:15:19.120134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.120170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.120318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.120354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.120492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.120559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.120715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.120745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.120836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.120863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 
00:25:56.406 [2024-12-09 18:15:19.120967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.121003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.121123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.121174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.121311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.121337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.121490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.121528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.121644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.121673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 
00:25:56.406 [2024-12-09 18:15:19.121802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.121840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.122024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.122083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.122234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.122293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.122398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.122424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.122512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.122537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 
00:25:56.406 [2024-12-09 18:15:19.122655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.122680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.122764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.406 [2024-12-09 18:15:19.122790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.406 qpair failed and we were unable to recover it. 00:25:56.406 [2024-12-09 18:15:19.122905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.122930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.123019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.123162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.123277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.123402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.123568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.123742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.123876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.123904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.124082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.124130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.124273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.124320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.124403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.124429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.124558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.124585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.124740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.124766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.124924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.124974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.125612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.125959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.125985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.126072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.126191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.126332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.126447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.126563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.126704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.126846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.126872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.126995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.127021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.127106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.127132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.127225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.127252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.127394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.127420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 
00:25:56.407 [2024-12-09 18:15:19.127501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.127527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.407 [2024-12-09 18:15:19.127622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.407 [2024-12-09 18:15:19.127648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.407 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.127736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.127767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.127862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.127888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.127997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 
00:25:56.408 [2024-12-09 18:15:19.128115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.128254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.128394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.128510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.128623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 
00:25:56.408 [2024-12-09 18:15:19.128734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.128844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.128869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 
00:25:56.408 [2024-12-09 18:15:19.129334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.129808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 
00:25:56.408 [2024-12-09 18:15:19.129926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.129951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.130034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.130059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.130173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.130198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.130287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.130312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 00:25:56.408 [2024-12-09 18:15:19.130422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.408 [2024-12-09 18:15:19.130448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.408 qpair failed and we were unable to recover it. 
00:25:56.408 [2024-12-09 18:15:19.130534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.130570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.130662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.130688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.130981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.131918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.131997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.132023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.132099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.132124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.132253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.132292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.132379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.132407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.132517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.132552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.132637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.408 [2024-12-09 18:15:19.132663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.408 qpair failed and we were unable to recover it.
00:25:56.408 [2024-12-09 18:15:19.132796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.132847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.133855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.133985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.134954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.134979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.135943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.135969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.136907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.136998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.137137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.137280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.137450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.137597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.137769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.137903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.137930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.409 [2024-12-09 18:15:19.138024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.409 [2024-12-09 18:15:19.138050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.409 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.138188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.138323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.138446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.138593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.138739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.138876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.138998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.139172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.139363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.139501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.139624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.139736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.139854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.139891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.140892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.140928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.141901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.141999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.142200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.142374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.142512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.142624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.142738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.142845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.142915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.143051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.143121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.143255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.143324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.143495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.143520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.143612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.410 [2024-12-09 18:15:19.143640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.410 qpair failed and we were unable to recover it.
00:25:56.410 [2024-12-09 18:15:19.143733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.143759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.143904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.143939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.144871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.144898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.145911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.145937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.146076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.146109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.146217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.146243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.146331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.146360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.146497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.411 [2024-12-09 18:15:19.146523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.411 qpair failed and we were unable to recover it.
00:25:56.411 [2024-12-09 18:15:19.146623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.146650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.146762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.146796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.146946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.146980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.147098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.147175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.147320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.147395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 
00:25:56.411 [2024-12-09 18:15:19.147540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.147570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.147685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.147710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.147802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.147829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.147973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.148021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.148123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.148172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 
00:25:56.411 [2024-12-09 18:15:19.148344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.148410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.148496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.148523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.148654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.148682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.411 [2024-12-09 18:15:19.148797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.411 [2024-12-09 18:15:19.148832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.411 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.148940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.148974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.149121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.149163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.149316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.149350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.149501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.149526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.149657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.149683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.149765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.149791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.149880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.149905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.149980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.150030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.150187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.150246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.150432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.150505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.150689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.150714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.150801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.150827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.150977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.151011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.151125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.151149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.151271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.151305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.151482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.151516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.151657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.151696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.151820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.151847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.151986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.152029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.152130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.152165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.152325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.152383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.152504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.152529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.152621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.152647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.152746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.152813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.152991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.153068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.153272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.153335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.153485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.153510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.153622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.153647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.153755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.153780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.153885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.153909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.154026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.154196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.154385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.154538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.154661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.154817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 
00:25:56.412 [2024-12-09 18:15:19.154951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.154982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.155093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.155143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.155232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.412 [2024-12-09 18:15:19.155258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.412 qpair failed and we were unable to recover it. 00:25:56.412 [2024-12-09 18:15:19.155345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.155371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.155505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.155550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.155673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.155700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.155791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.155817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.155936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.155961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.156072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.156111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.156294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.156331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.156479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.156506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.156601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.156628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.156714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.156740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.156878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.156903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.157002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.157123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.157269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.157384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.157499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.157647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.157766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.157880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.157905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.158380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.158795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.158968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.159167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.159323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.159517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.159645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.159785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.159946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.159992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.160082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.160109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.160213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.160279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.160409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.160449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.160538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.160576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 
00:25:56.413 [2024-12-09 18:15:19.160690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.160720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.160809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.413 [2024-12-09 18:15:19.160858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.413 qpair failed and we were unable to recover it. 00:25:56.413 [2024-12-09 18:15:19.160977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.161213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.161369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.161480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.161621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.161761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.161931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.161967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.162089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.162126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.162310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.162363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.162519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.162566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.162661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.162689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.162776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.162802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.162954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.163112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.163256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.163403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.163518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.163645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.163783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.163898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.163926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.164399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.164899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.164924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.165039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.165185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.165307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.165472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.165590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 
00:25:56.414 [2024-12-09 18:15:19.165704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.165819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.414 [2024-12-09 18:15:19.165845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.414 qpair failed and we were unable to recover it. 00:25:56.414 [2024-12-09 18:15:19.165923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.165948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.166040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.166181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.166321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.166438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.166580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.166763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.166894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.166921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.167007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.167125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.167261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.167374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.167484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.167621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.167755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.167876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.167902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.168048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.168093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.168324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.168365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.168493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.168521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.168616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.168643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.168758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.168783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.168870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.168895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.168977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.169108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.169330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.169485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.169650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.169791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.169914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.169949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.170119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.170167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.170266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.170327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.170510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.170555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.170667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.170692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.170816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.170842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.170931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.170976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.171176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.171210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.171368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.171396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.171519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.171550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 
00:25:56.415 [2024-12-09 18:15:19.171642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.171669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.171755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.171781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.171878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.172018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.415 [2024-12-09 18:15:19.172043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.415 qpair failed and we were unable to recover it. 00:25:56.415 [2024-12-09 18:15:19.172122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 
00:25:56.416 [2024-12-09 18:15:19.172237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.172377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.172504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.172630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.172769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 
00:25:56.416 [2024-12-09 18:15:19.172871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.172896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.172982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.173007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.173145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.173179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.173292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.173320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 00:25:56.416 [2024-12-09 18:15:19.173410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.416 [2024-12-09 18:15:19.173438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.416 qpair failed and we were unable to recover it. 
00:25:56.416 [2024-12-09 18:15:19.173527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.173564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.173703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.173750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.173831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.173857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.173935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.173961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.174964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.174990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.175906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.175985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.176929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.416 [2024-12-09 18:15:19.176955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.416 qpair failed and we were unable to recover it.
00:25:56.416 [2024-12-09 18:15:19.177035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.177932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.177958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.178836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.178883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.179896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.179922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.180893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.180944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.181090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.181257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.181411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.181534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.181670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.181811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.181990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.182169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.182316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.182455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.182596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.182733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.417 [2024-12-09 18:15:19.182895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.417 [2024-12-09 18:15:19.182920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.417 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.183943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.183969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.184841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.184877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.185059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.185106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.185245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.185293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.185435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.185461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.185554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.185580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.185723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.185766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.185944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.185991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.186108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.186145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.186260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.186287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.186375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.186400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.186480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.186506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.186649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.418 [2024-12-09 18:15:19.186696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.418 qpair failed and we were unable to recover it.
00:25:56.418 [2024-12-09 18:15:19.186807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.186856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.186973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.186998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.187085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.187200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.187319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 
00:25:56.418 [2024-12-09 18:15:19.187440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.187578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.187723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.187876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.187901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.188020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.188045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 
00:25:56.418 [2024-12-09 18:15:19.188130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.188156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.188273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.188297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.188414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.188439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.188515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.188542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 00:25:56.418 [2024-12-09 18:15:19.188632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.418 [2024-12-09 18:15:19.188658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.418 qpair failed and we were unable to recover it. 
00:25:56.418 [2024-12-09 18:15:19.188765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.188814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.188950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.188994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.189127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.189300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.189412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.189578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.189677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.189787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.189900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.189925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.190042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.190185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.190296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.190409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.190513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.190625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.190757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.190965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.190990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.191103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.191249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.191403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.191507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.191630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.191787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.191944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.191978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.192093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.192127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.192248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.192301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.192424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.192462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.192583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.192612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.192734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.192761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.192921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.192969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.193070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.193107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.193208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.193234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.193345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.193371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.193483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.193509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.193644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.193672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.193792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.193841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.193978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.194172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.194297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.194436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 
00:25:56.419 [2024-12-09 18:15:19.194542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.194696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.419 [2024-12-09 18:15:19.194868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.419 [2024-12-09 18:15:19.194894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.419 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.195001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.195120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 
00:25:56.420 [2024-12-09 18:15:19.195270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.195416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.195551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.195698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.195842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.195868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 
00:25:56.420 [2024-12-09 18:15:19.196002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.196184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.196339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.196520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.196648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 
00:25:56.420 [2024-12-09 18:15:19.196762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.196880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.196929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.197048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.197205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.197359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 
00:25:56.420 [2024-12-09 18:15:19.197507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.197628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.197761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.197899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.197924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.198008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.198034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 
00:25:56.420 [2024-12-09 18:15:19.198211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.198246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.198395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.198429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.198541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.198573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.198663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.198689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 00:25:56.420 [2024-12-09 18:15:19.198803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.420 [2024-12-09 18:15:19.198828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.420 qpair failed and we were unable to recover it. 
00:25:56.420 [2024-12-09 18:15:19.198962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.420 [2024-12-09 18:15:19.198997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.420 qpair failed and we were unable to recover it.
[... the same three-line pattern — posix.c:1054:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error, followed by "qpair failed and we were unable to recover it." — repeats continuously from 18:15:19.198962 through 18:15:19.215581, alternating across tqpair=0x20aefa0, tqpair=0x7efef8000b90, and tqpair=0x7efefc000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:56.423 [2024-12-09 18:15:19.215698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.215723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.215810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.215836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.215916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.215941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.216025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.216052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.216143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.216169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 
00:25:56.423 [2024-12-09 18:15:19.216309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.216362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.216445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.216471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.216568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.216594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.423 [2024-12-09 18:15:19.216707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.423 [2024-12-09 18:15:19.216733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.423 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.216849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.216875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.216993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.217594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.217922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.217947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.218061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.218181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.218333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.218444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.218609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.218722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.218857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.218883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.218989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.219471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.219947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.219972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.220073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.220125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.220248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.220278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.220397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.220424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.220539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.220572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.220672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.220707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 
00:25:56.424 [2024-12-09 18:15:19.220815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.220841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.220998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.221034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.424 [2024-12-09 18:15:19.221185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.424 [2024-12-09 18:15:19.221220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.424 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.221376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.221401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.221507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.221533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.221632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.221659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.221805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.221852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.221955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.222142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.222308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.222427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.222559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.222699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.222826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.222861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.222982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.223152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.223334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.223484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.223632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.223770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.223896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.223931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.224092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.224136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.224254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.224280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.224396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.224422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.224539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.224570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.224660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.224686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.224791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.224831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.224974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.225167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.225331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.225498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.225617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.225748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.225937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.225988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 00:25:56.425 [2024-12-09 18:15:19.226100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.425 [2024-12-09 18:15:19.226125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.425 qpair failed and we were unable to recover it. 
00:25:56.425 [2024-12-09 18:15:19.226234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.425 [2024-12-09 18:15:19.226259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.425 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x7efefc000b90, 0x7efef8000b90 and 0x20aefa0, all with addr=10.0.0.2, port=4420, from 18:15:19.226234 through 18:15:19.242620 ...]
00:25:56.428 [2024-12-09 18:15:19.242750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.242778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.242894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.242920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.243031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.243056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.243189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.243227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.243366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.243393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 
00:25:56.428 [2024-12-09 18:15:19.243510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.243536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.243685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.243711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.243843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.243878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.244037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.428 [2024-12-09 18:15:19.244062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.428 qpair failed and we were unable to recover it. 00:25:56.428 [2024-12-09 18:15:19.244188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.244223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.244366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.244400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.244512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.244537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.244635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.244660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.244766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.244791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.244959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.244993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.245193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.245330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.245458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.245591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.245723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.245856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.245966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.245994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.246123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.246289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.246406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.246541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.246658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.246795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.246907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.246934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.247051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.247163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.247338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.247450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.247565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.247677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.247815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.247954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.247979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.248063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.248091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.248177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.248203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.248304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.248343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 
00:25:56.429 [2024-12-09 18:15:19.248440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.248467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.248578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.248605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.248744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.429 [2024-12-09 18:15:19.248791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.429 qpair failed and we were unable to recover it. 00:25:56.429 [2024-12-09 18:15:19.248880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.248907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.249026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.249140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.249282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.249462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.249631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.249744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.249884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.249909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.250054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.250081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.250257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.250303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.250418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.250446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.250561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.250588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.250677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.250703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.250846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.250872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.250990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.251123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.251236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.251346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.251485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.251601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.251741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.251844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.251869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.252010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.252206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.252388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.252539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.252674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.252823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.252962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.252988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.253161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.253209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.253348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.253375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.253498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.253538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.253676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.253716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.253835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.253863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.253950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.253976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.254093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.254120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.254206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.254233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 
00:25:56.430 [2024-12-09 18:15:19.254310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.254336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.254449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.254475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.430 qpair failed and we were unable to recover it. 00:25:56.430 [2024-12-09 18:15:19.254565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.430 [2024-12-09 18:15:19.254592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.431 qpair failed and we were unable to recover it. 00:25:56.431 [2024-12-09 18:15:19.254676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.431 [2024-12-09 18:15:19.254707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.431 qpair failed and we were unable to recover it. 00:25:56.431 [2024-12-09 18:15:19.254818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.431 [2024-12-09 18:15:19.254844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.431 qpair failed and we were unable to recover it. 
00:25:56.431 [2024-12-09 18:15:19.254971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.255148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.255299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.255480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.255659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.255804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.255936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.255987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.256207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.256321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.256432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.256583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.256700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.256825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.256964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.257146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.257301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.257483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.257593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.257733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.257868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.257895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.258017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.258070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.258243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.258281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.258429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.258466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.258622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.258649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.258782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.258808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.258923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.258976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.259118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.259165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.259288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.259332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.259496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.259520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.259613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.259638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.259775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.259800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.259887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.259912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.260061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.260095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.260215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.260256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.260354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.260388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.260528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.260558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.431 [2024-12-09 18:15:19.260633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.431 [2024-12-09 18:15:19.260657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.431 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.260774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.260799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.260906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.260931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.261879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.261904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.262047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.262214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.262443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.262625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.262743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.262883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.262994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.263243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.263402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.263539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.263683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.263795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.263904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.263929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.264924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.264971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.265123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.265157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.265297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.265344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.265505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.265549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.265685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.265713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.265795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.265823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.265969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.266017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.266154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.266200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.266315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.266341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.266457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.266483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.266584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.266649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.432 qpair failed and we were unable to recover it.
00:25:56.432 [2024-12-09 18:15:19.266803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.432 [2024-12-09 18:15:19.266840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.267016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.267051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.267152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.267187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.267325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.267385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.267538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.267598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.267714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.267748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.267920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.267955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.268095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.268128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.268291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.268343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.268427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.268455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.268585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.268625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.268719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.268770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.268917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.268952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.269106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.269142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.269314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.269348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.269487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.269515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.269641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.269669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.269789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.269838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.270014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.433 [2024-12-09 18:15:19.270061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.433 qpair failed and we were unable to recover it.
00:25:56.433 [2024-12-09 18:15:19.270164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.270211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.270321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.270347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.270464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.270492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.270604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.270643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.270764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.270792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 
00:25:56.433 [2024-12-09 18:15:19.270947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.270983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.271090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.271124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.271290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.271344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.271474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.271502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.271600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.271627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 
00:25:56.433 [2024-12-09 18:15:19.271755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.271803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.271956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.272004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.272145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.272193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.272300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.272325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.272443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.272469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 
00:25:56.433 [2024-12-09 18:15:19.272599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.433 [2024-12-09 18:15:19.272628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.433 qpair failed and we were unable to recover it. 00:25:56.433 [2024-12-09 18:15:19.272726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.272752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.272834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.272860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.273013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.273129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.273266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.273373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.273511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.273635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.273809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.273863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.273967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.274015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.274188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.274235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.274342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.274367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.274506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.274532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.274673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.274721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.274836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.274886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.274978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.275119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.275259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.275396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.275527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.275674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.275829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.275867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.275991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.276110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.276221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.276363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.276508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.276633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.276764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.276942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.276978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.277135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.277170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.277357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.277392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.277599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.277625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.277710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.277735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.277871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.277905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.278035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.278077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.278221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.278254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.278401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.278434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 00:25:56.434 [2024-12-09 18:15:19.278584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.434 [2024-12-09 18:15:19.278631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.434 qpair failed and we were unable to recover it. 
00:25:56.434 [2024-12-09 18:15:19.278770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.278809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.279022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.279062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.279200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.279226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.279383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.279421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.279593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.279620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 
00:25:56.435 [2024-12-09 18:15:19.279730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.279755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.279893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.279930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.280048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.280098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.280251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.280287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.280465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.280500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 
00:25:56.435 [2024-12-09 18:15:19.280648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.280673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.280814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.280860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.280981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.281150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.281297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 
00:25:56.435 [2024-12-09 18:15:19.281475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.281673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.281785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.281925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.281950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.282097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.282132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 
00:25:56.435 [2024-12-09 18:15:19.282281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.282315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.282500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.282597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.282716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.282742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.282829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.282862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.283000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.283035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 
00:25:56.435 [2024-12-09 18:15:19.283202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.283236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.283408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.283442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.283564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.283610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.283701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.283727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 00:25:56.435 [2024-12-09 18:15:19.283840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.283866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it. 
00:25:56.435 [2024-12-09 18:15:19.283958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.435 [2024-12-09 18:15:19.284009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.435 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 18:15:19.283958 through 18:15:19.302663, cycling across tqpair values 0x7eff04000b90, 0x20aefa0, 0x7efef8000b90, and 0x7efefc000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:56.438 [2024-12-09 18:15:19.302868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.302923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 00:25:56.438 [2024-12-09 18:15:19.303079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.303117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 00:25:56.438 [2024-12-09 18:15:19.303273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.303310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 00:25:56.438 [2024-12-09 18:15:19.303491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.303528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 00:25:56.438 [2024-12-09 18:15:19.303696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.303735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 
00:25:56.438 [2024-12-09 18:15:19.303908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.303945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 00:25:56.438 [2024-12-09 18:15:19.304095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.438 [2024-12-09 18:15:19.304131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.438 qpair failed and we were unable to recover it. 00:25:56.438 [2024-12-09 18:15:19.304251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.304287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.304405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.304430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.304513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.304537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.304659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.304687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.304840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.304875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.305031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.305075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.305224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.305261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.305434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.305459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.305619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.305659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.305778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.305805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.305938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.305982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.306116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.306152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.306295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.306331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.306480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.306505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.306630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.306657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.306799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.306824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.307007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.307186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.307327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.307493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.307611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.307777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.307932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.307974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.308134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.308170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.308324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.308360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.308503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.308528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.308621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.308648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.308740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.308767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.308871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.308908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.309088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.309125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.309277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.309315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.309507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.309551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.309705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.309744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.309846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.309874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.309992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.310029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.310153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.310190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.310374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.310413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.310564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.310591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 
00:25:56.439 [2024-12-09 18:15:19.310672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.439 [2024-12-09 18:15:19.310697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.439 qpair failed and we were unable to recover it. 00:25:56.439 [2024-12-09 18:15:19.310802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.310827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.310962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.310998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.311112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.311148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.311303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.311340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.311473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.311512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.311622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.311650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.311784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.311836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.312015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.312064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.312212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.312263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.312381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.312407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.312518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.312553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.312644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.312697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.312846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.312882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.313064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.313100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.313227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.313265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.313446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.313474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.313565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.313592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.313700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.313747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.313882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.313931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.314103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.314243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.314354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.314494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.314611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.314779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.314943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.314994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.315084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.315111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.315255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.315283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.315394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.315419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.315561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.315631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.315795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.315834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.315990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.316027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.316179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.316216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.316372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.316417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.316598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.316637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.316729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.316783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.316934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.316971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.317119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.317155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.317295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.317331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 
00:25:56.440 [2024-12-09 18:15:19.317501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.317526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.317619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.317647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.440 [2024-12-09 18:15:19.317735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.440 [2024-12-09 18:15:19.317761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.440 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.317910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.317947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.318151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.318190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.318351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.318390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.318541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.318576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.318690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.318716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.318812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.318838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.318982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.319109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.319317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.319479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.319607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.319721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.319835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.319861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.319977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.320013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.320136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.320172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.320354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.320389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.320503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.320529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.320652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.320677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.320796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.320824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.321023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.321060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.321238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.321275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.321426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.321462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.321606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.321632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.321771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.321797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.321909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.321934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.322119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.322157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.322347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.322385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.322555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.322594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.322729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.322755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.322842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.322867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.323021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.323061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.323186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.323231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.323408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.323446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.323613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.323638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.323750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.323775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.323866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.323892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.324010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.324052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.324194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.324232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.324393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.324436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.324526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.324556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.324672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.324698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 00:25:56.441 [2024-12-09 18:15:19.324833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.324871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.441 qpair failed and we were unable to recover it. 
00:25:56.441 [2024-12-09 18:15:19.325007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.441 [2024-12-09 18:15:19.325046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.325254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.325292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.325446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.325484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.325674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.325700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.325784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.325810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.325891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.325936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.326098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.326135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.326278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.326315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.326464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.326505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.326638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.326665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.326772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.326822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.327021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.327059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.327220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.327263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.327422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.327460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.327627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.327653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.327764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.327791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.327921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.327971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.328124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.328163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.328340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.328378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.328555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.328598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.328724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.328751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.328829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.328855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.328962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.329159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.329314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.329455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.329597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.329774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.329907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.329936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.330029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.330150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.330288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.330447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.330577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.330711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 
00:25:56.442 [2024-12-09 18:15:19.330909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.330948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.331095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.331134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.331242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.331280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.442 qpair failed and we were unable to recover it. 00:25:56.442 [2024-12-09 18:15:19.331397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.442 [2024-12-09 18:15:19.331434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 00:25:56.443 [2024-12-09 18:15:19.331564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.331589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 
00:25:56.443 [2024-12-09 18:15:19.331698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.331723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 00:25:56.443 [2024-12-09 18:15:19.331799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.331824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 00:25:56.443 [2024-12-09 18:15:19.331901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.331927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 00:25:56.443 [2024-12-09 18:15:19.332030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.332076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 00:25:56.443 [2024-12-09 18:15:19.332230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.332268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 
00:25:56.443 [2024-12-09 18:15:19.332839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.443 [2024-12-09 18:15:19.332868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.443 qpair failed and we were unable to recover it. 
00:25:56.445 [2024-12-09 18:15:19.352338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.445 [2024-12-09 18:15:19.352376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.445 qpair failed and we were unable to recover it. 00:25:56.445 [2024-12-09 18:15:19.352560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.445 [2024-12-09 18:15:19.352598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.445 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.352719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.352757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.352914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.352951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.353061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.353099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.353260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.353298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.353408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.353446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.353592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.353630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.353760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.353797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.353926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.353964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.354116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.354160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.354273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.354310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.354504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.354541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.354710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.354748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.354905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.354942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.355072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.355109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.355254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.355291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.355416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.355453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.355614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.355653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.355809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.355846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.355961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.356000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.356153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.356190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.356350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.356388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.356492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.356530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.356710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.356748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.356876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.356914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.357069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.357109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.357252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.357289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.357479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.357537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.357762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.357840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.358101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.358177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.358429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.358487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.358779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.358857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.359086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.359163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.359367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.359424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 
00:25:56.446 [2024-12-09 18:15:19.359676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.359754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.360019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.446 [2024-12-09 18:15:19.360097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.446 qpair failed and we were unable to recover it. 00:25:56.446 [2024-12-09 18:15:19.360325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.360392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.360603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.360643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.360814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.360854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.361003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.361041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.361211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.361250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.361423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.361472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.361633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.361674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.361838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.361878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.362044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.362083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.362242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.362282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.362448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.362487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.362656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.362696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.362834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.362873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.363067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.363107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.363233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.363274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.363434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.363474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.363610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.363650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.363818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.363857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.364053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.364093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.364252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.364292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.364450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.364489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.364631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.364671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.364821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.364861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.364987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.365025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.365158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.365198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.365354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.365393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.365561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.365602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.365743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.365791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.365988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.366029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.366186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.366225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.366413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.366453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.366608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.366649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.366817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.366858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.367049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.367088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.367209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.367249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.367402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.367441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.367605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.367646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.367840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.367879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.447 [2024-12-09 18:15:19.368013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.368052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.368196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.368236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.368395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.368434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.368575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.368625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 00:25:56.447 [2024-12-09 18:15:19.368797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.447 [2024-12-09 18:15:19.368836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.447 qpair failed and we were unable to recover it. 
00:25:56.449 [2024-12-09 18:15:19.385321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.449 [2024-12-09 18:15:19.385362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.449 qpair failed and we were unable to recover it.
00:25:56.449 [2024-12-09 18:15:19.385536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.449 [2024-12-09 18:15:19.385620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.449 qpair failed and we were unable to recover it.
00:25:56.449 [2024-12-09 18:15:19.385819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.449 [2024-12-09 18:15:19.385865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.449 qpair failed and we were unable to recover it.
00:25:56.449 [2024-12-09 18:15:19.386043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.449 [2024-12-09 18:15:19.386086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.449 qpair failed and we were unable to recover it.
00:25:56.449 [2024-12-09 18:15:19.386228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.449 [2024-12-09 18:15:19.386271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.449 qpair failed and we were unable to recover it.
00:25:56.449 [2024-12-09 18:15:19.386402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.449 [2024-12-09 18:15:19.386446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.449 qpair failed and we were unable to recover it.
00:25:56.449 [2024-12-09 18:15:19.386623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.450 [2024-12-09 18:15:19.386668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.450 qpair failed and we were unable to recover it.
00:25:56.450 [2024-12-09 18:15:19.386798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.450 [2024-12-09 18:15:19.386842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.450 qpair failed and we were unable to recover it.
00:25:56.450 [2024-12-09 18:15:19.387010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.450 [2024-12-09 18:15:19.387053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.450 qpair failed and we were unable to recover it.
00:25:56.450 [2024-12-09 18:15:19.387228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.450 [2024-12-09 18:15:19.387270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.450 qpair failed and we were unable to recover it.
00:25:56.450 [2024-12-09 18:15:19.391674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.391718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.391898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.391942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.392161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.392204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.392336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.392398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.392599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.392630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 
00:25:56.450 [2024-12-09 18:15:19.392739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.392769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.392899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.392929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.393056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.393086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.393214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.393243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.393345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.393376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 
00:25:56.450 [2024-12-09 18:15:19.393490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.393535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.393721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.393755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.393862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.393906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.394021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.450 [2024-12-09 18:15:19.394047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.450 qpair failed and we were unable to recover it. 00:25:56.450 [2024-12-09 18:15:19.394138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.394264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.394376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.394484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.394607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.394746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.394857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.394966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.394992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.395121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.395155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.395283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.395321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.395453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.395484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.395620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.395647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.395741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.395767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.395858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.395884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.396016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.396151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.396318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.396466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.396604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.396739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.396917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.396962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.397102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.397225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.397340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.397460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.397580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.397689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.397804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.397919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.397944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.398030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.398141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.398248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.398397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.398551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.398662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.398802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.398955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.398985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.399104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.399130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.399273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.399319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.399402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.399428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.399510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.399536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 
00:25:56.451 [2024-12-09 18:15:19.399685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.399728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.451 qpair failed and we were unable to recover it. 00:25:56.451 [2024-12-09 18:15:19.399807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.451 [2024-12-09 18:15:19.399832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.400398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.400777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.400959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.401126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.401292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.401440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.401605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.401784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.401941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.401988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.402128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.402261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.402400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.402512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.402633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.402757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.402898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.402928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.403259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.403877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.403902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.403985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.404449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.404922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.404947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 
00:25:56.452 [2024-12-09 18:15:19.405066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.405091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.405175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.405200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.405314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.405339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.405430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.452 [2024-12-09 18:15:19.405456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.452 qpair failed and we were unable to recover it. 00:25:56.452 [2024-12-09 18:15:19.405530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.405562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.405665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.405704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.405796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.405824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.405938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.405964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.406099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.406124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.406209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.406241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.406331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.406356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.406503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.406529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.406644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.406690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.406792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.406823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.406953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.407148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.407280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.407447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.407556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.407679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.407792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.407929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.407955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.408439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.408841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.408974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.409130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.409263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.409429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.409538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.409667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.409805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.409950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.409976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.410084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.410109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.410258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.410289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.410383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.410414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.410518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.410556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.410688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.410714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.410859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.410885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.411004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.411029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.411132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.411163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 
00:25:56.453 [2024-12-09 18:15:19.411259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.411292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.453 [2024-12-09 18:15:19.411464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.453 [2024-12-09 18:15:19.411495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.453 qpair failed and we were unable to recover it. 00:25:56.454 [2024-12-09 18:15:19.411634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.454 [2024-12-09 18:15:19.411660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.454 qpair failed and we were unable to recover it. 00:25:56.454 [2024-12-09 18:15:19.411759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.454 [2024-12-09 18:15:19.411785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.454 qpair failed and we were unable to recover it. 00:25:56.454 [2024-12-09 18:15:19.411875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.454 [2024-12-09 18:15:19.411905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.454 qpair failed and we were unable to recover it. 
00:25:56.454 [2024-12-09 18:15:19.412003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.454 [2024-12-09 18:15:19.412034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.454 qpair failed and we were unable to recover it. 00:25:56.454 [2024-12-09 18:15:19.412168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.454 [2024-12-09 18:15:19.412199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.454 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.412354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.412386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.412496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.412523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.412666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.412705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 
00:25:56.731 [2024-12-09 18:15:19.412795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.412823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.412909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.412935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.413028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.413139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.413306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 
00:25:56.731 [2024-12-09 18:15:19.413435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.413613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.413727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.413874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.413899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.414005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.414030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 
00:25:56.731 [2024-12-09 18:15:19.414128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.414158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.414247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.731 [2024-12-09 18:15:19.414277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.731 qpair failed and we were unable to recover it. 00:25:56.731 [2024-12-09 18:15:19.414407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.414438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.414542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.414572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.414684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.414710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 
00:25:56.732 [2024-12-09 18:15:19.414798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.414823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.414940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.414966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.415049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.415171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.415324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 
00:25:56.732 [2024-12-09 18:15:19.415489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.415644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.415759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.415902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.415927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.416010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.416036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 
00:25:56.732 [2024-12-09 18:15:19.416135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.416166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.416312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.416359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.416502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.416535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.416653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.416679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 00:25:56.732 [2024-12-09 18:15:19.416758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.732 [2024-12-09 18:15:19.416783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.732 qpair failed and we were unable to recover it. 
00:25:56.732 [2024-12-09 18:15:19.416862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.416889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.416989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.417900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.417984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.418929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.418954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.419098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.419129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.419291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.419326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.419434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.419466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.419588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.419614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.732 [2024-12-09 18:15:19.419725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.732 [2024-12-09 18:15:19.419752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.732 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.419839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.419865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.420027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.420248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.420419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.420604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.420728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.420843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.420966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.421883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.421978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.422113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.422256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.422428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.422575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.422740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.422848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.422881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.423074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.423279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.423478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.423671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.423779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.423886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.423997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.424034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.424172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.424222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.424344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.424374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.424489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.424520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.424647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.424686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.424779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.424806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.424957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.425007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.425150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.425198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.425275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.425300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.425407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.733 [2024-12-09 18:15:19.425446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.733 qpair failed and we were unable to recover it.
00:25:56.733 [2024-12-09 18:15:19.425591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.425619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.425716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.425742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.425925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.425980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.426192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.426310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.426446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.426623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.426737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.426883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.426995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.427870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.427991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.428136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.428272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.428471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.428624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.428762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.428883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.428909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.429893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.429919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.430059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.430170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.430325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.430460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.430661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.734 [2024-12-09 18:15:19.430778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.734 qpair failed and we were unable to recover it.
00:25:56.734 [2024-12-09 18:15:19.430877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.734 [2024-12-09 18:15:19.430903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.734 qpair failed and we were unable to recover it. 00:25:56.734 [2024-12-09 18:15:19.431016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.734 [2024-12-09 18:15:19.431042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.734 qpair failed and we were unable to recover it. 00:25:56.734 [2024-12-09 18:15:19.431180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.734 [2024-12-09 18:15:19.431210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.734 qpair failed and we were unable to recover it. 00:25:56.734 [2024-12-09 18:15:19.431327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.431359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.431486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.431518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.431665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.431694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.431783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.431809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.431944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.431989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.432140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.432183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.432290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.432320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.432477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.432502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.432620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.432646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.432778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.432823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.432965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.433154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.433312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.433429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.433628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.433759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.433900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.433925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.434039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.434173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.434308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.434420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.434562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.434676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.434791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.434950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.434975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.435090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.435224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.435349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.435488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.435647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.435761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.435930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.435956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.436055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.436170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.436303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.436411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.436556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 
00:25:56.735 [2024-12-09 18:15:19.436673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.735 qpair failed and we were unable to recover it. 00:25:56.735 [2024-12-09 18:15:19.436819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.735 [2024-12-09 18:15:19.436851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.436977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.437122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.437239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.437377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.437482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.437641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.437782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.437925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.438037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.438604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.438971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.438997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.439113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.439253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.439388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.439491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.439625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.439760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.439896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.439921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.440498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.440922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.440947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.441099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.441124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 
00:25:56.736 [2024-12-09 18:15:19.441263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.736 [2024-12-09 18:15:19.441288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.736 qpair failed and we were unable to recover it. 00:25:56.736 [2024-12-09 18:15:19.441366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.441391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.441498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.441523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.441637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.441662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.441747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.441772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.441900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.441925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.442559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.442924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.442949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.443039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.443155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.443332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.443443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.443567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.443673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.443775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.443940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.443965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.444395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.444953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.444978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.445097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.445236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.445350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.445448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.445562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.445708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.445815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.445934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.445959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.446071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.446096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.737 [2024-12-09 18:15:19.446198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.446223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 
00:25:56.737 [2024-12-09 18:15:19.446334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.737 [2024-12-09 18:15:19.446359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.737 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.446441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.446469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.446560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.446585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.446675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.446701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.446783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.446808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.446918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.446943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.447602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.447846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.447994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.448128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.448280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.448399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.448508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.448663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.448826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.448933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.448959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.449091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.449204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.449346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.449459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.449579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.449716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.449859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.449889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.450016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.450175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.450361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.450510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.450656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.450788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.450944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.450970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.451104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.451136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.451259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.451297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.451456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.451494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.451659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.451686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 00:25:56.738 [2024-12-09 18:15:19.451785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.738 [2024-12-09 18:15:19.451811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.738 qpair failed and we were unable to recover it. 
00:25:56.738 [2024-12-09 18:15:19.451977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.452089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.452231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.452394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.452625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 
00:25:56.739 [2024-12-09 18:15:19.452766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.452887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.452913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.453001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.453028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.453147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.453184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.453351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.453383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 
00:25:56.739 [2024-12-09 18:15:19.453564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.453623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.453763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.453801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.453930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.453957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.454085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.454197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 
00:25:56.739 [2024-12-09 18:15:19.454357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.454485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.454631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.454733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.454841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 
00:25:56.739 [2024-12-09 18:15:19.454968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.454994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.455095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.455127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.455272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.455302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.455391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.455428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 00:25:56.739 [2024-12-09 18:15:19.455586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.739 [2024-12-09 18:15:19.455613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.739 qpair failed and we were unable to recover it. 
00:25:56.739 [2024-12-09 18:15:19.455721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.455747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.455823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.455848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.455931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.455957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.456967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.456993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.457109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.457137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.457249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.457281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.457413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.457445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.739 qpair failed and we were unable to recover it.
00:25:56.739 [2024-12-09 18:15:19.457577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.739 [2024-12-09 18:15:19.457622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.457705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.457732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.457844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.457872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.457961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.457987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.458894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.458920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.459962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.459990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.460136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.460163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.460313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.460347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.460526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.460573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.460675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.460701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.460809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.460835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.460926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.460952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.461943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.461992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.462190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.462221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.462343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.462379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.462490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.462525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.740 [2024-12-09 18:15:19.462644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.740 [2024-12-09 18:15:19.462671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.740 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.462754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.462781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.462895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.462931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.463951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.463987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.464960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.464985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.465119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.465149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.465270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.465308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.465505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.465557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.465677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.465704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.465856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.465882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.465978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.466023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.466171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.466206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.466328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.466527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.466613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.466735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.466763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.466847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.466882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.466968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.467004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.467115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.467140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.467241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.467291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.467474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.467507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.467651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.467698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.467871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.467903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.467999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.468031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.468159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.468191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.468295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.468326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.468459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.468490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.468634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.468667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.468805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.741 [2024-12-09 18:15:19.468854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.741 qpair failed and we were unable to recover it.
00:25:56.741 [2024-12-09 18:15:19.469033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.469082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.469179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.469209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.469335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.469364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.469496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.469526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.469720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.469764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.469951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.470010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.470203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.470244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.470421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.470463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.470613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.470645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.470801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.470832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.471018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.471059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.471277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.471318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.471445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.471505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.471685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.471717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.471824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.471881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.472062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.472098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.472237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.472294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.472426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.472468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.472611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.472643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.472803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.472835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.472992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.473027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.473162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.473197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.473370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.473405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.473562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.473610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.473717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.473748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.473847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.473901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.474031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.742 [2024-12-09 18:15:19.474089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.742 qpair failed and we were unable to recover it.
00:25:56.742 [2024-12-09 18:15:19.474290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.474331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.474523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.474575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.474665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.474696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.474827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.474858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.474988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.475024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 
00:25:56.742 [2024-12-09 18:15:19.475178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.475214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.475378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.475415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.475579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.475611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.475740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.475772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.475934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.475971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 
00:25:56.742 [2024-12-09 18:15:19.476143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.476180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.476333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.742 [2024-12-09 18:15:19.476371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.742 qpair failed and we were unable to recover it. 00:25:56.742 [2024-12-09 18:15:19.476514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.476560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.476677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.476710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.476815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.476846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.477007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.477038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.477171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.477223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.477411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.477448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.477569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.477617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.477715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.477746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.477851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.477882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.477990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.478028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.478226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.478263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.478431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.478472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.478662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.478694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.478798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.478830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.478966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.479014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.479200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.479248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.479394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.479451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.479661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.479692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.479803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.479840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.480031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.480078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.480210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.480246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.480437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.480498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.480678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.480726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.480901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.480956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.481054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.481086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.481265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.481319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.481474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.481505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.481643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.481682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.481832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.481870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.482052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.482089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.482211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.482268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.482420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.482452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.482585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.482617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.482710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.482742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.482875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.482912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.483024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.483061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.483217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.483254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.743 [2024-12-09 18:15:19.483386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.483422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 
00:25:56.743 [2024-12-09 18:15:19.483606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.743 [2024-12-09 18:15:19.483638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.743 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.483735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.483766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.483921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.483975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.484140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.484193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.484328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.484379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.484483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.484515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.484667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.484722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.484851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.484903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.485086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.485244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.485377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.485500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.485681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.485804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.485964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.485995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.486104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.486136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.486265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.486296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.486403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.486434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.486540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.486579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.486688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.486718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.486848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.486884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.487024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.487159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.487287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.487442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.487614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.487778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.487950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.487981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.488108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.488138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.488294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.488325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.488432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.488463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.488594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.488626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.488713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.488744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.488844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.488875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 00:25:56.744 [2024-12-09 18:15:19.489007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.744 [2024-12-09 18:15:19.489038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.744 qpair failed and we were unable to recover it. 
00:25:56.744 [2024-12-09 18:15:19.489167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.489198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.489326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.489357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.489469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.489500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.489605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.489636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.489795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.489826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.489954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.489984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.490076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.490106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.490264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.490294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.490422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.490452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.490570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.490602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.490695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.490726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.490823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.490854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.491018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.491050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.491149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.491179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.491334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.491365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.491513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.491579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.491695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.491749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.491863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.491919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.492106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.492146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.492324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.492363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.492578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.492630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.492816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.492870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.493040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.493079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.493248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.493302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.493431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.493461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.493606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.493643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.493771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.493802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.493943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.493973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.494065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.494097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.494229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.494260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.494380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.494428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.494539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.494581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.494696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.494729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.494924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.494978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.495108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.495146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 
00:25:56.745 [2024-12-09 18:15:19.495276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.495312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.495495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.495527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.495651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.495682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.495838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.745 [2024-12-09 18:15:19.495888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.745 qpair failed and we were unable to recover it. 00:25:56.745 [2024-12-09 18:15:19.496036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.496086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.496249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.496299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.496431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.496462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.496594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.496641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.496754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.496787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.496893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.496924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.497030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.497061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.497194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.497224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.497350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.497381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.497492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.497524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.497642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.497673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.497770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.497800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.497969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.498121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.498273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.498441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.498618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.498777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.498921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.498951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.499083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.499114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.499252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.499282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.499417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.499448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.499572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.499604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.499712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.499742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.499850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.499881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.499982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.500104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.500275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.500436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.500585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.500770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.500921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.500955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.501063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.501096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.501196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.501227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.501346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.501378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.501481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.501511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 
00:25:56.746 [2024-12-09 18:15:19.501671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.501723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.501871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.746 [2024-12-09 18:15:19.501925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.746 qpair failed and we were unable to recover it. 00:25:56.746 [2024-12-09 18:15:19.502084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.502135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 00:25:56.747 [2024-12-09 18:15:19.502236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.502266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 00:25:56.747 [2024-12-09 18:15:19.502395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.502425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 
00:25:56.747 [2024-12-09 18:15:19.502518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.502557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 00:25:56.747 [2024-12-09 18:15:19.502691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.502722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 00:25:56.747 [2024-12-09 18:15:19.502862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.502892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 00:25:56.747 [2024-12-09 18:15:19.502997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.503028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 00:25:56.747 [2024-12-09 18:15:19.503165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.747 [2024-12-09 18:15:19.503195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.747 qpair failed and we were unable to recover it. 
00:25:56.747 [2024-12-09 18:15:19.503290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.503320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.503422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.503453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.503554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.503586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.503677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.503708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.503842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.503873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.504060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.504215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.504363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.504527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.504679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.504819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.504977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.505014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.505168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.505205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.505350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.505387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.505557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.505590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.505749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.505798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.505896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.505927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.506029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.506060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.506201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.506250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.506372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.506403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.506532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.506574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.506760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.506816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.506973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.507152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.507336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.507497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.507674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.507811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.507967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.507998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.508126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.508157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.508254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.508285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.747 qpair failed and we were unable to recover it.
00:25:56.747 [2024-12-09 18:15:19.508390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.747 [2024-12-09 18:15:19.508421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.508561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.508593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.508754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.508805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.508964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.509016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.509148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.509178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.509286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.509316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.509450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.509480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.509644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.509701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.509848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.509903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.510037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.510075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.510251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.510288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.510418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.510454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.510598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.510630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.510788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.510839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.510993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.511032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.511211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.511261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.511400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.511433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.511608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.511649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.511770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.511807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.511924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.511962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.512093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.512131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.512283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.512330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.512472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.512503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.512663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.512709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.512880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.512921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.513050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.513089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.513240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.513279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.513430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.513468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.513614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.513646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.513784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.513823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.514062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.514263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.514431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.514585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.514729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.514865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.514971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.515001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.515116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.515156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.515298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.515338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.748 [2024-12-09 18:15:19.515500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.748 [2024-12-09 18:15:19.515539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.748 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.515710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.515741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.515862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.515908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.516068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.516122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.516245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.516287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.516393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.516424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.516587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.516653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.516836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.516894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.517042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.517085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.517216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.517256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.517404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.517434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.517568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.517600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.517699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.517730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.517868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.517906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.518060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.518097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.518218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.518256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.518398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.518430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.518561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.518599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.518723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.518773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.518932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.518987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.519079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.519111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.519235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.519291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.519423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.519453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.519632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.519692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.519871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.519912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.520046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.520088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.520212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.520253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.520388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.520430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.520601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.520633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.520763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.520803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.520935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.520976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.521104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.521143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.521285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.521327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.521467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.521498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.521642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.521673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.521834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.521885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.522002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.522055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.749 [2024-12-09 18:15:19.522149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.749 [2024-12-09 18:15:19.522180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.749 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.522286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.522320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.522483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.522515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.522662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.522695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.522819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.522858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.523008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.523048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.523153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.523192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.523376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.750 [2024-12-09 18:15:19.523429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:56.750 qpair failed and we were unable to recover it.
00:25:56.750 [2024-12-09 18:15:19.523542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.523580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.523701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.523755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.523915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.523967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.524153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.524202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.524298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.524329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.524425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.524455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.524616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.524648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.524780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.524811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.524904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.524935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.525062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.525191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.525370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.525531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.525715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.525844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.525964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.525995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.526092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.526123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.526286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.526336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.526433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.526464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.526648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.526706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.526840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.526881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.527034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.527073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.527192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.527230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.527355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.527393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.527553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.527603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.527721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.527759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.527910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.527948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.528104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.528143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.528260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.528297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.528449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.528487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.528634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.528665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.528763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.528797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.528978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.529029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.529125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.529155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 00:25:56.750 [2024-12-09 18:15:19.529282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.750 [2024-12-09 18:15:19.529334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.750 qpair failed and we were unable to recover it. 
00:25:56.750 [2024-12-09 18:15:19.529438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.529469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.529626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.529677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.529812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.529845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.529939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.529969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.530089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.530126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.530225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.530255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.530377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.530407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.530509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.530540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.530646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.530675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.530777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.530829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.530997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.531034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.531228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.531265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.531437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.531478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.531664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.531695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.531844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.531907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.532081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.532132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.532283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.532333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.532465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.532496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.532695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.532746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.532882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.532913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.533042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.533174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.533332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.533464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.533661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.533785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.533920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.533952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.534056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.534088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.534233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.534264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.534419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.534450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.534604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.534636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.534746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.534777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.534910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.534943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.535072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.535103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.535230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.535261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.535363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.535393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.535499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.535529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.535670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.535701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.535828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.535865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 00:25:56.751 [2024-12-09 18:15:19.536019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.751 [2024-12-09 18:15:19.536057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.751 qpair failed and we were unable to recover it. 
00:25:56.751 [2024-12-09 18:15:19.536173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.751 [2024-12-09 18:15:19.536211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.751 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it." — repeats continuously from 18:15:19.536 through 18:15:19.558, alternating between tqpair=0x20aefa0 and tqpair=0x7efef8000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:56.754 [2024-12-09 18:15:19.558899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.558939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.559063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.559103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.559295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.559335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.559531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.559581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.559708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.559748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 
00:25:56.754 [2024-12-09 18:15:19.559885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.559924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.560058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.560099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.560258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.560297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.560488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.560564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.560738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.560782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 
00:25:56.754 [2024-12-09 18:15:19.560923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.560964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.561122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.561171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.561372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.561412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.561580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.561621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.561795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.561835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 
00:25:56.754 [2024-12-09 18:15:19.562028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.562068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.562265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.562306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.562477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.562518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.562690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.562730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.754 [2024-12-09 18:15:19.562907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.562947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 
00:25:56.754 [2024-12-09 18:15:19.563148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.754 [2024-12-09 18:15:19.563188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.754 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.563321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.563362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.563526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.563577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.563712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.563753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.563953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.563993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.564138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.564179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.564334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.564374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.564503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.564553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.564724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.564764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.564959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.564999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.565161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.565201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.565397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.565437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.565579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.565620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.565758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.565798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.565965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.566004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.566201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.566241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.566460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.566503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.566645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.566689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.566881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.566923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.567129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.567172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.567300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.567343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.567500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.567543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.567720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.567763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.567929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.567971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.568114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.568157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.568283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.568327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.568477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.568519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.568701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.568744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.568884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.568927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.569090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.569132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.569313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.569355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.569489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.569539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.569731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.569775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.569947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.569989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.570133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.570176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.570377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.570440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.570663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.570729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.570889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.570931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.571096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.571139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.571334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.571400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.571609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.571654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.571801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.571844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.571985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.572030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.572208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.572250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.572385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.572428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 
00:25:56.755 [2024-12-09 18:15:19.572643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.755 [2024-12-09 18:15:19.572687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.755 qpair failed and we were unable to recover it. 00:25:56.755 [2024-12-09 18:15:19.572877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.572919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.573108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.573149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.573308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.573348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.573509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.573557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 
00:25:56.756 [2024-12-09 18:15:19.573731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.573772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.573937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.573976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.574112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.574153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.574305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.574345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 00:25:56.756 [2024-12-09 18:15:19.574503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.574554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it. 
00:25:56.756 [2024-12-09 18:15:19.574676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.756 [2024-12-09 18:15:19.574717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.756 qpair failed and we were unable to recover it.
[... the three-message sequence above (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats ~115 times between 18:15:19.574 and 18:15:19.599, alternating between tqpair=0x7eff04000b90 and tqpair=0x7efef8000b90, always with addr=10.0.0.2, port=4420 ...]
00:25:56.758 [2024-12-09 18:15:19.599787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.599852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.600091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.600156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.600366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.600429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.600599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.600685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.600926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.600972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 
00:25:56.758 [2024-12-09 18:15:19.601199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.601247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.601459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.601535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.601825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.601890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.602140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.602206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.758 [2024-12-09 18:15:19.602443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.602507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 
00:25:56.758 [2024-12-09 18:15:19.602735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.758 [2024-12-09 18:15:19.602800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.758 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.603098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.603146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.603424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.603488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.603730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.603795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.603985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.604048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.604311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.604375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.604574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.604622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.604799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.604867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.605060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.605109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.605284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.605332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.605581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.605631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.605858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.605915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.606114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.606162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.606357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.606407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.606607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.606657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.606880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.606928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.607117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.607165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.607389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.607438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.607629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.607679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.607870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.607919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.608061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.608110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.608284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.608332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.608476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.608524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.608772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.608845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.609052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.609103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.609311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.609361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.609591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.609640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.609863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.609911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.610164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.610215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.610425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.610488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.610735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.610782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.610990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.611040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.611239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.611288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.611483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.611561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.611780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.611828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.612013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.612062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.612301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.612348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.612528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.612587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.612793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.612865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.613068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.613120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.613309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.613362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.613584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.613638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.613836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.613889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.614094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.614149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.614361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.614413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.614574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.614628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.614834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.614887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.615091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.615142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.615378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.615429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.615588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.615642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 
00:25:56.759 [2024-12-09 18:15:19.615813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.615866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.616103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.759 [2024-12-09 18:15:19.616166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.759 qpair failed and we were unable to recover it. 00:25:56.759 [2024-12-09 18:15:19.616367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.616419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.616666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.616719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.616933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.616984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 
00:25:56.760 [2024-12-09 18:15:19.617156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.617209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.617411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.617462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.617634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.617689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.617903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.617954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.618156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.618207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 
00:25:56.760 [2024-12-09 18:15:19.618439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.618490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.618679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.618728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.618880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.618947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.619186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.619238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 00:25:56.760 [2024-12-09 18:15:19.619456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.619519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it. 
00:25:56.760 [2024-12-09 18:15:19.619772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.760 [2024-12-09 18:15:19.619841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.760 qpair failed and we were unable to recover it.
00:25:56.760 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors and "qpair failed and we were unable to recover it" messages for tqpair=0x7efef8000b90 (addr=10.0.0.2, port=4420) repeat continuously from 18:15:19.620018 through 18:15:19.649324 ...]
00:25:56.762 [2024-12-09 18:15:19.649476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.649514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.649650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.649690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.649882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.649940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.650177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.650234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.650447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.650501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 
00:25:56.762 [2024-12-09 18:15:19.650686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.650727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.650880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.650932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.651098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.651150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.651436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.651501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.651700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.651739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 
00:25:56.762 [2024-12-09 18:15:19.651932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.651999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.652178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.652257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.652503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.652542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.652717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.652755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.652914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.652952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 
00:25:56.762 [2024-12-09 18:15:19.653107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.653147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.653363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.653426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.653649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.653690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.653913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.654000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.654218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.654270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 
00:25:56.762 [2024-12-09 18:15:19.654470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.654509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.654680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.654720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.762 qpair failed and we were unable to recover it. 00:25:56.762 [2024-12-09 18:15:19.654845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.762 [2024-12-09 18:15:19.654903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.655148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.655201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.655386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.655436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.655625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.655667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.655798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.655838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.655987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.656025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.656149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.656189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.656360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.656412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.656614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.656654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.656785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.656824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.657086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.657151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.657362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.657415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.657624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.657663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.657883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.657950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.658142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.658195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.658391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.658442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.658679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.658718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.658853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.658892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.659105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.659156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.659392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.659443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.659629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.659671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.659803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.659861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.660076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.660128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.660305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.660357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.660529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.660595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.660778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.660817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.661073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.661137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.661334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.661385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.661559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.661623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.661784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.661823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.661946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.661985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.662103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.662143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.662274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.662313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.662435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.662475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.662646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.662686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.662836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.662874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.662991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.663036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.663251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.663304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.663503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.663588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.663764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.663803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.664000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.664052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.664278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.664342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.664569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.664632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 
00:25:56.763 [2024-12-09 18:15:19.664767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.763 [2024-12-09 18:15:19.664805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.763 qpair failed and we were unable to recover it. 00:25:56.763 [2024-12-09 18:15:19.664958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.664996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.665154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.665192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.665374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.665426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.665646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.665686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 
00:25:56.764 [2024-12-09 18:15:19.665839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.665880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.666124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.666175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.666394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.666446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.666633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.666673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 00:25:56.764 [2024-12-09 18:15:19.666796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.764 [2024-12-09 18:15:19.666835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.764 qpair failed and we were unable to recover it. 
00:25:56.764 [2024-12-09 18:15:19.667071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.667134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.667388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.667451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.667653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.667694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.667835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.667911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.668085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.668165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.668419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.668483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.668688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.668727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.668890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.668944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.669174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.669238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.669441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.669505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.669763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.669801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.670016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.670055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.670292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.670357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.670562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.670602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.670749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.670788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.671025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.671104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.671386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.671445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.671676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.671717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.671884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.671969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.672165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.672240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.672428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.672500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.672706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.672745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.672865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.672905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.673111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.673192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.673414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.673473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.673596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.673637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.673772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.673810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.674031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.674081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.674298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.674358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.674581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.674639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.674797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.674836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.675063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.675114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.675336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.675388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.675603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.675644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.675770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.675808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.675986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.676042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.676293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.676348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.676539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.676615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.676738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.676776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.676972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.677029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.677277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.677332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.677540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.677591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.677749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.677787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.677993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.678048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.678286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.764 [2024-12-09 18:15:19.678344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.764 qpair failed and we were unable to recover it.
00:25:56.764 [2024-12-09 18:15:19.678591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.678631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.678799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.678837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.679001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.679058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.679310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.679366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.679589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.679634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.679768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.679809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.680018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.680074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.680259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.680316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.680529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.680625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.680818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.680885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.681058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.681117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.681232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.681272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.681474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.681528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.681706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.681761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.681916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.681971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.682183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.682240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.682415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.682470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.682665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.682723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.682931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.682994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.683141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.683180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.683349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.683404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.683579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.683643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.683814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.683869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.684078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.684133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.684296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.684352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.684568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.684624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.684836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.684892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.685064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.685119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.685284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.685340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.685536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.685609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.685771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.685828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.686057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.686112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.686367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.686422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.686590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.686647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.686834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.686889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.687079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.687132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.687316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.687373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.687573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.687639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.687859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.687913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.688144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.688203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.688363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.688417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.688623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.688680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.688913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.688968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.689188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.689228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.689368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.689406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.689600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.689686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.689888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.689943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.690157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.690213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.690384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.690440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.690675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.690732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.690938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.690995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.691191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.691245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.691469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.691529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.691787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.691844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.692012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.692066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.765 [2024-12-09 18:15:19.692281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.765 [2024-12-09 18:15:19.692335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.765 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.692561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.692618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.692833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.692871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.693032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.693108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.693321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.693377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.693631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.693688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.693940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.693979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.694110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.694150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.694342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.694399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.694617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.694665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.694828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.694867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.695065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.695121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.695365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.695420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.695647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.695704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.695916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.766 [2024-12-09 18:15:19.695972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:56.766 qpair failed and we were unable to recover it.
00:25:56.766 [2024-12-09 18:15:19.696156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.696211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.696438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.696492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.696750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.696830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.697020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.697079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.697296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.697354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.697571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.697628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.697848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.697902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.698097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.698175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.698451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.698525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.698781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.698858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.699116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.699191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.699493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.699617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.699904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.699979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.700288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.700363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.700634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.700696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.700916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.700982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.701213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.701271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.701478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.701532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.701716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.701771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.702022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.702098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.702363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.702439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.702669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.702746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.703015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.703080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.703385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.703459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.703750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.703826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.704128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.704208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.704467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.704535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.704789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.704844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.705066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.705119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.705410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.705474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.705767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.705842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.706070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.706144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.706483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.706606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.706874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.706949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.707180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.707253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.707590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.707667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.708007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.708091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.708366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.708428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.708702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.708764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.708964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.709022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 
00:25:56.766 [2024-12-09 18:15:19.709254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.766 [2024-12-09 18:15:19.709313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.766 qpair failed and we were unable to recover it. 00:25:56.766 [2024-12-09 18:15:19.709535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.709609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.709858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.709952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.710227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.710306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.710640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.710724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.711010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.711091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.711372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.711451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.711754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.711835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.712156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.712238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.712577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.712660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.712910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.712989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.713279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.713361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.713689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.713771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.714056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.714134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.714413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.714492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.714841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.714905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.715095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.715153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.715446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.715508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.715779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.715839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.716075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.716133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.716385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.716472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.716812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.716891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.717169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.717249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.717565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.717653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.717995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.718074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.718395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.718476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.718747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.718832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.719126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.719188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.719374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.719434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.719673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.719745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.719915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.719973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.720152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.720210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.720463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.720560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.720855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.720934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 00:25:56.767 [2024-12-09 18:15:19.721233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-09 18:15:19.721313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:56.767 qpair failed and we were unable to recover it. 
00:25:56.767 [2024-12-09 18:15:19.721630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.767 [2024-12-09 18:15:19.721714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:56.767 qpair failed and we were unable to recover it.
[... the triple above — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats 115 times between app timestamps 18:15:19.721630 and 18:15:19.764177 (console timestamps 00:25:56.767–00:25:57.038), identical except for timestamps ...]
00:25:57.038 [2024-12-09 18:15:19.764424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.764487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.764732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.764798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.765059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.765122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.765416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.765479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.765752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.765817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.766118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.766181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.766386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.766449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.766692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.766757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.766970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.767033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.767254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.767317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.767578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.767644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.767867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.767930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.768219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.768283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.768579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.768643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.768929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.768992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.769244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.769307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.769602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.769665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.769918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.769980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.770222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.770285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.770532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.770608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.770816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.770879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.771164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.771227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.771485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.771576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.771866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.771929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.772180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.772243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.772527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.772612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.772816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.772878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.773166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.773229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.773523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.773604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.773874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.773936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.774183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.774245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.774486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.774566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.774817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.774879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.775140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.775202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.038 [2024-12-09 18:15:19.775456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.775519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 
00:25:57.038 [2024-12-09 18:15:19.775751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.038 [2024-12-09 18:15:19.775815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.038 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.776050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.776113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.776300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.776604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.776670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.776927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.776988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.777179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.777241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.777467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.777530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.777846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.777918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.778203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.778265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.778466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.778528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.778797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.778863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.779131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.779193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.779433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.779494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.779765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.779829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.780123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.780186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.780485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.780564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.780818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.780881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.781138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.781201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.781456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.781518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.781823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.781886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.782125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.782188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.782448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.782511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.782796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.782859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.783141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.783204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.783499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.783577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.783795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.783858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.784057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.784120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.784371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.784433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.784685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.784750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.785000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.785063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.785273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.785570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.785635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.785882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.785945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.786187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.786249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.786474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.786564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.786788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.786851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.787158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.787219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.787463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.787526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.787844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.787907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.039 [2024-12-09 18:15:19.788158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.788220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 
00:25:57.039 [2024-12-09 18:15:19.788473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.039 [2024-12-09 18:15:19.788535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.039 qpair failed and we were unable to recover it. 00:25:57.040 [2024-12-09 18:15:19.788806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.040 [2024-12-09 18:15:19.788869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.040 qpair failed and we were unable to recover it. 00:25:57.040 [2024-12-09 18:15:19.789111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.040 [2024-12-09 18:15:19.789173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.040 qpair failed and we were unable to recover it. 00:25:57.040 [2024-12-09 18:15:19.789468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.040 [2024-12-09 18:15:19.789530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.040 qpair failed and we were unable to recover it. 00:25:57.040 [2024-12-09 18:15:19.789773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.040 [2024-12-09 18:15:19.789835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.040 qpair failed and we were unable to recover it. 
00:25:57.042 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triple repeats continuously through 18:15:19.825, all for tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 ...]
00:25:57.042 [2024-12-09 18:15:19.825437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.042 [2024-12-09 18:15:19.825500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.042 qpair failed and we were unable to recover it. 00:25:57.042 [2024-12-09 18:15:19.825799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.042 [2024-12-09 18:15:19.825871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.042 qpair failed and we were unable to recover it. 00:25:57.042 [2024-12-09 18:15:19.826115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.042 [2024-12-09 18:15:19.826179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.042 qpair failed and we were unable to recover it. 00:25:57.042 [2024-12-09 18:15:19.826426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.042 [2024-12-09 18:15:19.826489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.042 qpair failed and we were unable to recover it. 00:25:57.042 [2024-12-09 18:15:19.826798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.042 [2024-12-09 18:15:19.826861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.042 qpair failed and we were unable to recover it. 
00:25:57.042 [2024-12-09 18:15:19.827158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.827221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.827485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.827562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.827828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.827891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.828173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.828236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.828522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.828623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.828894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.828955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.829237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.829300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.829600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.829665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.829912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.829976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.830270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.830333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.830632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.830699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.830947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.831010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.831225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.831289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.831512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.831589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.831868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.831931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.832220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.832284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.832495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.832584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.832815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.832878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.833132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.833195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.833476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.833539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.833796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.833859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.834108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.834170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.834414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.834479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.834742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.834806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.835068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.835131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.835374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.835437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.835688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.835752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.836043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.836106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.836353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.836416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.836651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.836715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.836955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.837018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.837253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.837316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.837574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.837637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.837922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.837985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.838233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.838296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.838529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.838605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.838892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.838955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.839252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.839316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.839619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.839683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.839888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.839951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 
00:25:57.043 [2024-12-09 18:15:19.840172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.840236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.840472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.840533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.840838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.043 [2024-12-09 18:15:19.840901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.043 qpair failed and we were unable to recover it. 00:25:57.043 [2024-12-09 18:15:19.841087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.841149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.841432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.841495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.841808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.841872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.842111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.842174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.842460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.842523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.842848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.842912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.843211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.843274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.843577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.843642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.843923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.843987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.844221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.844283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.844566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.844630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.844840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.844907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.845128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.845192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.845436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.845499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.845733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.845797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.846036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.846099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.846336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.846399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.846613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.846678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.846968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.847031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.847316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.847379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.847574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.847635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.847884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.847956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.848221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.848285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.848525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.848620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.848898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.848960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.849156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.849218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.849466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.849529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.849799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.849861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.850097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.850159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.850374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.850437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.850703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.850766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.850953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.851015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.851255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.851320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.851601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.851666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.851862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.851925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.852194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.852257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.852592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.852657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.852940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.853003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.853227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.853289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.853532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.853608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.853896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.853959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.854212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.854274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 
00:25:57.044 [2024-12-09 18:15:19.854531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.044 [2024-12-09 18:15:19.854607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.044 qpair failed and we were unable to recover it. 00:25:57.044 [2024-12-09 18:15:19.854804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.854866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.855160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.855221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.855462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.855524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.855824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.855888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.856151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.856213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.856407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.856479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.856771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.856837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.857082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.857144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.857388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.857451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.857705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.857768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.857972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.858035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.858268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.858331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.858587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.858651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.858905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.858967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.859250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.859312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.859576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.859640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.859913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.859976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.860228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.860290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.860519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.860595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.860895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.860958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.861195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.861258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.861504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.861578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.861861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.861924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.862212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.862275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.862519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.862599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.862855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.862917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.863217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.863280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.863577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.863641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.863898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.863961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.864251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.864314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.864578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.864641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.864848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.864912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.865148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.865228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.865471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.865533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 
00:25:57.045 [2024-12-09 18:15:19.865825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.865888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.866143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.866207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.866489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.866562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.045 qpair failed and we were unable to recover it. 00:25:57.045 [2024-12-09 18:15:19.866772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.045 [2024-12-09 18:15:19.866834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.867086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.867150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.867438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.867500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.867779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.867842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.868098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.868161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.868410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.868471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.868770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.868834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.869049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.869113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.869304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.869367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.869585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.869651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.869854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.869916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.870184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.870247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.870501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.870575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.870814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.870876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.871128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.871191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.871477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.871539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.871841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.871903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.872106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.872168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.872455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.872518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.872787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.872851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.873103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.873169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.873458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.873520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.873842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.873905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.874205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.874290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.874639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.874728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.875031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.875119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.875476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.875583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.875908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.876351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.876436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.876756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.876844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.877142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.877212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.877469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.877534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.877772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.877836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.878087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.878149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.878447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.878511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.878742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.878824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.879168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.879267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.879611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.879685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.879947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.880013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.880318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.880383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.880596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.880663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.880944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.881009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 
00:25:57.046 [2024-12-09 18:15:19.881280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.046 [2024-12-09 18:15:19.881345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.046 qpair failed and we were unable to recover it. 00:25:57.046 [2024-12-09 18:15:19.881611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.047 [2024-12-09 18:15:19.881678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.047 qpair failed and we were unable to recover it. 00:25:57.047 [2024-12-09 18:15:19.881934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.047 [2024-12-09 18:15:19.881998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.047 qpair failed and we were unable to recover it. 00:25:57.047 [2024-12-09 18:15:19.882254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.047 [2024-12-09 18:15:19.882317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.047 qpair failed and we were unable to recover it. 00:25:57.047 [2024-12-09 18:15:19.882518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.047 [2024-12-09 18:15:19.882603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.047 qpair failed and we were unable to recover it. 
00:25:57.049 [2024-12-09 18:15:19.918445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.918511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.918740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.918806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.919050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.919114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.919375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.919442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.919713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.919788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 
00:25:57.049 [2024-12-09 18:15:19.920018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.920084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.920300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.920366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.920592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.920658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.920882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.920967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.921234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.921315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 
00:25:57.049 [2024-12-09 18:15:19.921578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.921661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.921848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.921912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.922113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.922172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.922432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.049 [2024-12-09 18:15:19.922491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.049 qpair failed and we were unable to recover it. 00:25:57.049 [2024-12-09 18:15:19.922744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.922806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.923052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.923137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.923446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.923510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.923824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.923904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.924173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.924237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.924542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.924646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.924936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.925005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.925199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.925262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.925619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.925680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.925998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.926061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.926276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.926343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.926637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.926699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.926999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.927058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.927296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.927356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.927542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.927634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.927898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.927966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.928165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.928231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.928489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.928558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.928798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.928880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.929123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.929187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.929424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.929487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.929734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.929796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.930118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.930184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.930474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.930537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.930925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.930989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.931224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.931315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.931604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.931665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.931909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.931975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.932180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.932245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.932469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.932531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.932835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.932895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.933229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.933304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.933567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.933646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.933840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.933917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.934183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.934247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.934562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.934642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.934925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.935006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.935237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.935303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.935506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.935621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.935843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.935936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.936159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.936222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.936497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.936573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.936776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.936834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.937051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.937114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.937378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.937444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.937726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.937787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 
00:25:57.050 [2024-12-09 18:15:19.938069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.938133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.938386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.938451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.938713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.050 [2024-12-09 18:15:19.938777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.050 qpair failed and we were unable to recover it. 00:25:57.050 [2024-12-09 18:15:19.939036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.939094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.939370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.939437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.939685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.939749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.940044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.940109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.940349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.940413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.940706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.940779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.941022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.941088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.941292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.941368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.941666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.941732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.941972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.942037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.942226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.942289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.942591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.942666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.942900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.942964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.943177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.943242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.943470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.943532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.943785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.943848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.944065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.944140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.944362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.944437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.944781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.944847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.945149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.945213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.945449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.945513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.945783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.945846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.946139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.946203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.946470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.946536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.946810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.946875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.947058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.947122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.947311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.947373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.947663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.947728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.947974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.948047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.948260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.948324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.948616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.948683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.948920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.948982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.949226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.949291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.949579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.949648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.949898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.949963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.950226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.950290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.950498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.950608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.950869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.950934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.951143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.951206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.951433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.951500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.951776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.951852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.952053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.952116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.952402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.952464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.952705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.952771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.952966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.953030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.953248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.953312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.953569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.953636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 
00:25:57.051 [2024-12-09 18:15:19.953845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.953909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.051 [2024-12-09 18:15:19.954124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.051 [2024-12-09 18:15:19.954189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.051 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.954426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.954489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.954725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.954801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.955060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.955123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.955414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.955478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.955741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.955806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.956047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.956109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.956345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.956413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.956726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.956804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.957054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.957119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.957354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.957417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.957710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.957784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.958050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.958113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.958370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.958438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.958831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.959032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.959096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.959339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.959407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.959632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.959702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.959940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.960017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.960315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.960379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.960652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.960718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.960954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.961017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.961321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.961386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.961615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.961695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.961950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.962015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.962240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.962313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.962537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.962614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.962859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.962922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.963173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.963247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.963496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.963575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.963822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.963893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.964152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.964215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.964494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.964571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.964785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.964848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.965102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.965165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.965411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.965480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.965787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.965853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.966100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.966163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.966368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.966431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.966712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.966777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.967029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.967095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.967308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.967371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.967644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.967715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.967953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.968016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.968259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.968321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.968573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.968645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.968880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.968946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.969162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.969226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.969458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.969532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 
00:25:57.052 [2024-12-09 18:15:19.969856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.969938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.970193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.970259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.970502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.970584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.052 qpair failed and we were unable to recover it. 00:25:57.052 [2024-12-09 18:15:19.970852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.052 [2024-12-09 18:15:19.970916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 00:25:57.053 [2024-12-09 18:15:19.971165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.053 [2024-12-09 18:15:19.971231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 
00:25:57.053 [2024-12-09 18:15:19.971520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.053 [2024-12-09 18:15:19.971614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 00:25:57.053 [2024-12-09 18:15:19.971864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.053 [2024-12-09 18:15:19.971930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 00:25:57.053 [2024-12-09 18:15:19.972184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.053 [2024-12-09 18:15:19.972249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 00:25:57.053 [2024-12-09 18:15:19.972504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.053 [2024-12-09 18:15:19.972586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 00:25:57.053 [2024-12-09 18:15:19.972818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.053 [2024-12-09 18:15:19.972881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.053 qpair failed and we were unable to recover it. 
00:25:57.053 [2024-12-09 18:15:19.973085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.053 [2024-12-09 18:15:19.973148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.053 qpair failed and we were unable to recover it.
00:25:57.055 [2024-12-09 18:15:20.005863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.055 [2024-12-09 18:15:20.005961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.055 qpair failed and we were unable to recover it.
00:25:57.055 [2024-12-09 18:15:20.010927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.055 [2024-12-09 18:15:20.011000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.055 qpair failed and we were unable to recover it.
00:25:57.055 [2024-12-09 18:15:20.011732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.011783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.011971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.012021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.012184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.012268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.012594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.012645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.012829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.012900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 
00:25:57.055 [2024-12-09 18:15:20.013194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.013259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.013507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.013583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.013785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.013871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.014129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.014198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.014348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.014396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 
00:25:57.055 [2024-12-09 18:15:20.014684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.014733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.014907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.014956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.015153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.015201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.015413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.015478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.015699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.015748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 
00:25:57.055 [2024-12-09 18:15:20.016001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.016069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.016317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.016365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.016566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.016646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.016847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.016912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.017094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.017158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 
00:25:57.055 [2024-12-09 18:15:20.017465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.017513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.017702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.017760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.018037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.018101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.018321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.018394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.018624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.018691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 
00:25:57.055 [2024-12-09 18:15:20.018908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.018971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.019205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.019270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.019575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.019642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.055 [2024-12-09 18:15:20.019911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.055 [2024-12-09 18:15:20.019974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.055 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.020227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.020291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.020585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.020652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.020961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.021025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.021324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.021387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.021636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.021701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.022011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.022075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.022353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.022417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.022683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.022750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.022992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.023056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.023343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.023407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.023626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.023693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.023936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.024000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.024309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.024373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.024684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.024749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.025018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.025082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.025333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.025398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.025675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.025741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.025992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.026056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.026317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.026383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.026715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.026781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.027085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.027149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.027356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.027420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.027707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.027774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.027982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.028045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.028297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.028361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.028615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.028686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.028908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.028974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.029224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.029290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.029536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.029616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.029829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.029894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.030155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.030220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.030467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.030531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.030822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.030899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.031191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.031257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.031520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.031615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.031875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.031940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.032196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.032259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.032521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.032606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.032867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.032933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.033139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.033203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.033437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.033501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.033770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.033835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.034069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.034133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.034343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.034407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.034673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.034738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 00:25:57.056 [2024-12-09 18:15:20.034936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.056 [2024-12-09 18:15:20.035000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.056 qpair failed and we were unable to recover it. 
00:25:57.056 [2024-12-09 18:15:20.035281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.056 [2024-12-09 18:15:20.035347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.056 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failed error triple repeats continuously for tqpair=0x7efefc000b90 (addr=10.0.0.2, port=4420) from 18:15:20.035 through 18:15:20.072; duplicate repetitions elided ...]
00:25:57.330 [2024-12-09 18:15:20.072196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.330 [2024-12-09 18:15:20.072260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.330 qpair failed and we were unable to recover it. 00:25:57.330 [2024-12-09 18:15:20.072457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.330 [2024-12-09 18:15:20.072512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.330 qpair failed and we were unable to recover it. 00:25:57.330 [2024-12-09 18:15:20.072778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.330 [2024-12-09 18:15:20.072834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.330 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.073165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.073230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.073523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.073627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.073898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.073962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.074181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.074246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.074564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.074630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.074884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.074948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.075191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.075257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.075455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.075519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.075806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.075870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.076137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.076192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.076462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.076526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.076770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.076835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.077095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.077160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.077404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.077469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.077778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.077843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.078106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.078172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.078401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.078469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.078796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.078862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.079142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.079206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.079500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.079585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.079813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.079888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.080185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.080249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.080425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.080488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.080761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.080827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.081144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.081200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.081385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.081440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.081642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.081699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.081927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.081982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.082247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.082313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.082682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.082744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.083044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.083108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.083408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.083473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.083722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.083788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.084084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.084149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.084400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.084464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.084735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.084797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.085046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.085113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 
00:25:57.331 [2024-12-09 18:15:20.085407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.085472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.331 [2024-12-09 18:15:20.085705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.331 [2024-12-09 18:15:20.085772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.331 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.085965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.086029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.086269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.086334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.086688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.086754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.087048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.087112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.087426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.087490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.087845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.087910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.088167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.088231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.088515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.088598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.088860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.088924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.089228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.089293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.089512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.089612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.089908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.089971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.090212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.090276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.090583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.090651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.090942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.091005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.091310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.091369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.091636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.091712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.092026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.092090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.092276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.092339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.092594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.092660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.092917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.092981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.093245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.093311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.093607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.093672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.093885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.093950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.094248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.094312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.094611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.094677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.094954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.095019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.095262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.095327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.095602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.095659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.095848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.095934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.096252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.096319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.096613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.096678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 00:25:57.332 [2024-12-09 18:15:20.096961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.332 [2024-12-09 18:15:20.097025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.332 qpair failed and we were unable to recover it. 
00:25:57.332 [2024-12-09 18:15:20.097275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.332 [2024-12-09 18:15:20.097341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.332 qpair failed and we were unable to recover it.
00:25:57.332-00:25:57.335 [2024-12-09 18:15:20.097609 through 18:15:20.134480] (the same three-record sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats identically for every retry in this interval)
00:25:57.335 [2024-12-09 18:15:20.134678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.335 [2024-12-09 18:15:20.134735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.335 qpair failed and we were unable to recover it. 00:25:57.335 [2024-12-09 18:15:20.135019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.335 [2024-12-09 18:15:20.135091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.335 qpair failed and we were unable to recover it. 00:25:57.335 [2024-12-09 18:15:20.135282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.335 [2024-12-09 18:15:20.135337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.335 qpair failed and we were unable to recover it. 00:25:57.335 [2024-12-09 18:15:20.135576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.135643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.135837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.135901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.136213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.136273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.136500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.136590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.136810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.136883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.137092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.137147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.137409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.137474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.137708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.137773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.138092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.138151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.138432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.138487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.138700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.138777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.139028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.139084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.139357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.139420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.139676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.139736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.140032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.140092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.140343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.140396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.140584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.140684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.140901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.140960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.141233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.141299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.141571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.141632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.141865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.141920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.142118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.142172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.142362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.142418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.142680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.142746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.142952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.143016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.143309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.143379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.143593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.143650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.143872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.143947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.144215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.144271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.144479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.144533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.144861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.144917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.145107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.145183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.145379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.145433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.145655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.145712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.145982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.146048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.146331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.146394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 00:25:57.336 [2024-12-09 18:15:20.146700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.336 [2024-12-09 18:15:20.146757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.336 qpair failed and we were unable to recover it. 
00:25:57.336 [2024-12-09 18:15:20.146977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.147032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.147232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.147287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.147452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.147508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.147695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.147749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.147906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.147960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.148128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.148183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.148404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.148483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.148821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.148888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.149161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.149244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.149485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.149579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.149762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.149816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.150059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.150114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.150360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.150416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.150680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.150745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.151075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.151135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.151367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.151422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.151612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.151668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.151888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.151944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.152143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.152223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.152422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.152488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.152755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.152832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.153038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.153093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.153328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.153392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.153590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.153647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.153866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.153920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.154182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.154247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.154506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.154578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.154793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.154847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.155067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.155131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.155366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.155423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.155699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.155760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.155991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.156050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.156238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.156311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.156511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.156584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 00:25:57.337 [2024-12-09 18:15:20.156787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.337 [2024-12-09 18:15:20.156842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.337 qpair failed and we were unable to recover it. 
00:25:57.337 [2024-12-09 18:15:20.157100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.337 [2024-12-09 18:15:20.157164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.337 qpair failed and we were unable to recover it.
00:25:57.337 [2024-12-09 18:15:20.157353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.337 [2024-12-09 18:15:20.157417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.337 qpair failed and we were unable to recover it.
00:25:57.337 [2024-12-09 18:15:20.157707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.337 [2024-12-09 18:15:20.157767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.337 qpair failed and we were unable to recover it.
00:25:57.337 [2024-12-09 18:15:20.158012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.337 [2024-12-09 18:15:20.158066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.337 qpair failed and we were unable to recover it.
00:25:57.337 [2024-12-09 18:15:20.158257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.337 [2024-12-09 18:15:20.158317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.337 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.158525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.158608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.158867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.158931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.159192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.159256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.159498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.159579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.159845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.159900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.160134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.160193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.160475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.160529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.160816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.160880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.161147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.161228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.161448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.161512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.161767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.161826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.162072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.162153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.162337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.162391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.162637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.162703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.162919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.162986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.163270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.163335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.163558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.163614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.163789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.163845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.164014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.164069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.164278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.164341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.164598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.164662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.164960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.165024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.165243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.165298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.165511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.165608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.165862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.165917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.166194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.166258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.166540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.166645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.166862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.166919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.167095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.167159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.167471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.167526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.167767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.167831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.168021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.168084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.168294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.168358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.168601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.168657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.168831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.168895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.169164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.169219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.169388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.169442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.169674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.169740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.169948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.338 [2024-12-09 18:15:20.170011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.338 qpair failed and we were unable to recover it.
00:25:57.338 [2024-12-09 18:15:20.170233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.170287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.170463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.170518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.170737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.170792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.171066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.171131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.171382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.171448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.171719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.171775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.172022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.172099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.172320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.172375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.172655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.172720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.172937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.173000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.173252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.173333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.173557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.173629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.173901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.174092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.174147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.174434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.174498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.174769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.174834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.175133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.175215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.175498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.175576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.175798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.175876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.176189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.176274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.176596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.176673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.176945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.177021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.177302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.177389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.177715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.177805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.178186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.178274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.178584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.178676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.178996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.179066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.179284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.179339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.179570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.179642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.179838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.179892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.180079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.180162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.180417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.180498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.180807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.180882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.339 [2024-12-09 18:15:20.181102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.339 [2024-12-09 18:15:20.181178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.339 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.181482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.181576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.181877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.181954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.182333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.182420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.182803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.182892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.183198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.183266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.183506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.183584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.183767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.183821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.184078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.184133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.184403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.184466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.184777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.184871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.185145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.185241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.185514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.185642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.185911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.185977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.186231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.186317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.186629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.186717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.187082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.187167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.187473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.187542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.187770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.187825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.187996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.188049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.188254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.188314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.188509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.188577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.188749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.188804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.189105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.189168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.189415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.189478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.189754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.189843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.190180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.190254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.190514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.190616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.190842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.190917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.191227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.191313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.191617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.191707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.192075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.192161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.192462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.192559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.192800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.192856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.193091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.193156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.193415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.193478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.193718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.193774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.194070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.340 [2024-12-09 18:15:20.194174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.340 qpair failed and we were unable to recover it.
00:25:57.340 [2024-12-09 18:15:20.194470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-09 18:15:20.194558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-09 18:15:20.194899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-09 18:15:20.194998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-09 18:15:20.195341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-09 18:15:20.195429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.195750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.195838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.196151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.196237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.196538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.196648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.196954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.197035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.197218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.197271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.197457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.197566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.197825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.197903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.198144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.198205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.198482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.198618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.198916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.199004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.199336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.199436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.199816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.199907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.200180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.200267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.200596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.200686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.201037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.201131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.201384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.201440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.201659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.201713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.201911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.201965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.202237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.202300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.202586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.202670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.203036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.203111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.203456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.203577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.203835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.203947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.204302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.204401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.204766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.204857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.205219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.205294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.205601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.205677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.205936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.206013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.206326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.206413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.206725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.206801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.207131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.207218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.207585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.207676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.207982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.208067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.208258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.208313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.208470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.208522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.208709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.208763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.209029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.209092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-09 18:15:20.209363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.209459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.209709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.209786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-09 18:15:20.210043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-09 18:15:20.210117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.210466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.210566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.210854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.210970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.211307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.211393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.211762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.211850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.212121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.212201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.212433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.212487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.212674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.212730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.212942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.212995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.213245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.213307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.213612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.213706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.214012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.214085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.214369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.214443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.214696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.214773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.215120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.215210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.215517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.215630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.215950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.216038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.216351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.216444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.216794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.216851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.217126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.217179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.217394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.217448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.217667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.217722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.217980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.218042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.218386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.218461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.218744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.218821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.219160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.219249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.219611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.219704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.220064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.220151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.220512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.220621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.220905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.220962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.221156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.221210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.221473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.221535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-09 18:15:20.221860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-09 18:15:20.221924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-09 18:15:20.222124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.342 [2024-12-09 18:15:20.222186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.342 qpair failed and we were unable to recover it.
[... identical connect()/qpair error group repeated for each reconnect attempt from 18:15:20.222 through 18:15:20.262 ...]
00:25:57.345 [2024-12-09 18:15:20.262296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.345 [2024-12-09 18:15:20.262383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.345 qpair failed and we were unable to recover it.
00:25:57.345 [2024-12-09 18:15:20.262704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.262791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-09 18:15:20.263105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.263192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-09 18:15:20.263574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.263634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-09 18:15:20.263868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.263948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-09 18:15:20.264195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.264248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 
00:25:57.345 [2024-12-09 18:15:20.264499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.264580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-09 18:15:20.264791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.264854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-09 18:15:20.265144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-09 18:15:20.265232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.265529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.265643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.265915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.265991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.266267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.266354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.266669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.266758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.267104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.267190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.267526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.267620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.267912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.267988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.268172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.268226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.268453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.268507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.268739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.268796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.269022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.269085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.269328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.269415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.269748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.269823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.270122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.270222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.270580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.270669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.270968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.271055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.271403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.271488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.271853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.271921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.272203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.272287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.272537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.272630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.272840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.272895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.273090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.273144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.273375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.273437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.273702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.273767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.274066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.274138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.274454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.274527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.274825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.274899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.275178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.275265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.275592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.275681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.275996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.276083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.276454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.276542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.276852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.276934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.277167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.277222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.277466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.277520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.277731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.277789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.278024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.278087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.278303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.278374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 
00:25:57.346 [2024-12-09 18:15:20.278653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.278754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.279091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.346 [2024-12-09 18:15:20.279165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.346 qpair failed and we were unable to recover it. 00:25:57.346 [2024-12-09 18:15:20.279425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.279512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.279811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.279899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.280253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.280339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.347 [2024-12-09 18:15:20.280648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.280718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.280968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.281023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.281208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.281262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.281521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.281642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.281892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.281955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.347 [2024-12-09 18:15:20.282163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.282225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.282487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.282572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.282872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.282949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.283236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.283298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.283536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.283614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.347 [2024-12-09 18:15:20.283861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.283924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.284171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.284233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.284474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.284536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.284846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.284910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.285212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.285275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.347 [2024-12-09 18:15:20.285464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.285526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.285738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.285801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.286102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.286164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.286445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.286507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.286781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.286844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.347 [2024-12-09 18:15:20.287053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.287116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.287323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.287389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.287623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.287688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.287950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.288012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.288298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.288361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.347 [2024-12-09 18:15:20.288612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.288677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.288957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.289019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.289224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.289287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.289601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.289667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 00:25:57.347 [2024-12-09 18:15:20.289941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.347 [2024-12-09 18:15:20.290003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.347 qpair failed and we were unable to recover it. 
00:25:57.350 [2024-12-09 18:15:20.320114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.320275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.320303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.320408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.320436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.320566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.320596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.320701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.320730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 
00:25:57.350 [2024-12-09 18:15:20.320886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.320915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.321034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.321063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.321194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.321223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.321352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.321380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 00:25:57.350 [2024-12-09 18:15:20.321469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.350 [2024-12-09 18:15:20.321498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.350 qpair failed and we were unable to recover it. 
00:25:57.350 [2024-12-09 18:15:20.321660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.350 [2024-12-09 18:15:20.321691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.350 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.321831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.321860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.321956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.321985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.322159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.322188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.322281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.322310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.322410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.322438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.322554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.322600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.322696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.322724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.322854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.322881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.323927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.323954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.324888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.324917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.325929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.325955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.326949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.326975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.327103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.327130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.327258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.327285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.327400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.327427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.351 qpair failed and we were unable to recover it.
00:25:57.351 [2024-12-09 18:15:20.327525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.351 [2024-12-09 18:15:20.327560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.327659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.327686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.327789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.327816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.327952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.327979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.328128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.328234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.328379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.328565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.328711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.328853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.328979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.352 qpair failed and we were unable to recover it.
00:25:57.352 [2024-12-09 18:15:20.329889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.352 [2024-12-09 18:15:20.329915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1582784 Killed "${NVMF_APP[@]}" "$@"
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1583339
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1583339
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1583339 ']'
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:57.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:57.610 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:57.891 [2024-12-09 18:15:20.673228] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:25:57.891 [2024-12-09 18:15:20.673308] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.714818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bcf30 (9): Bad file descriptor
00:25:57.891 [2024-12-09 18:15:20.715102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.715149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.715254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.715280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.715401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.715426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.715580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.715606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.715703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.715727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.715817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.715842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.715989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.716939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.716963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.717076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.717213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.717324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.717468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.717600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-12-09 18:15:20.717712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-12-09 18:15:20.717805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.717829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-12-09 18:15:20.717996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.718033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-12-09 18:15:20.718152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.718178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-12-09 18:15:20.718291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.718316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-12-09 18:15:20.718407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.718432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-12-09 18:15:20.718522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.718581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-12-09 18:15:20.718680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-12-09 18:15:20.718707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.718829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.718854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.718946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.718971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.719083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.719238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.719352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.719495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.719621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.719736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.719843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.719949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.719977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.720089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.720226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.720336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.720474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.720644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.720758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.720898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.720927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.721037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.721143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.721286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.721450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.721561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.721700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.721802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.721827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.721979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.722444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.722909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.722935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.723020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.723046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.723157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.723182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.723293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.723334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.723424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.723451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.723568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.723595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-12-09 18:15:20.723708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-12-09 18:15:20.723733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-12-09 18:15:20.723820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.723845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.723960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.723986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.724064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.724211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.724358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.724479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.724602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.724714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.724817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.724949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.724974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.725060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.725200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.725316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.725421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.725567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.725691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.725857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.725883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.726002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.726150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.726266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.726429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.726607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.726747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.726851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.726961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.726986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.727641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.727909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.727996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.728158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.728271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.728379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.728525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.728683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-12-09 18:15:20.728827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-12-09 18:15:20.728852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-12-09 18:15:20.728987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.729096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.729276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.729383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.729530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.729676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.729842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.729952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.729977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.730074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.730184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.730344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.730532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.730670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.730816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.730955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.730981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.731069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.731209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.731318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.731421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.731565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.731678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.731793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.731960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.731986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.732074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.732185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.732321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.732454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.732575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.732718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.732884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.732910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.733023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.733163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.733303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.733410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.733520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 
00:25:57.894 [2024-12-09 18:15:20.733644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.733752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.733889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.733915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.734005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-12-09 18:15:20.734031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.894 qpair failed and we were unable to recover it. 00:25:57.894 [2024-12-09 18:15:20.734143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.734169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.734253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.734278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.734386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.734412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.734512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.734568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.734664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.734692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.734808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.734835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.734975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.735111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.735253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.735394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.735535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.735677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.735794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.735903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.735928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.736043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.736150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.736318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.736484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.736631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.736743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.736923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.736963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.737048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.737209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.737381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.737521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.737670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.737808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.737924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.737951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.738042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.738067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.738154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.738179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.738268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.738294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 
00:25:57.895 [2024-12-09 18:15:20.738411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.738436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.738579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.738606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.895 qpair failed and we were unable to recover it. 00:25:57.895 [2024-12-09 18:15:20.738698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.895 [2024-12-09 18:15:20.738728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.738843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.738868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.738983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-12-09 18:15:20.739087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.739219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.739351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.739501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.739630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-12-09 18:15:20.739798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.739933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.739959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-12-09 18:15:20.740474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-12-09 18:15:20.740960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-12-09 18:15:20.740986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-12-09 18:15:20.741097 .. 2024-12-09 18:15:20.755661] (~110 further repetitions of the same three-line sequence elided: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 / 0x7efef8000b90 / 0x7efefc000b90 / 0x7eff04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:25:57.898 [2024-12-09 18:15:20.754071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:57.899 [2024-12-09 18:15:20.755746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.755773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.755887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.755913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.755998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.756152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.756290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 
00:25:57.899 [2024-12-09 18:15:20.756438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.756555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.756694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.756810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.756947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.756973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 
00:25:57.899 [2024-12-09 18:15:20.757056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.757210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.757358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.757499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.757625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 
00:25:57.899 [2024-12-09 18:15:20.757746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.757863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.757890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 
00:25:57.899 [2024-12-09 18:15:20.758399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.758883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.758909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 
00:25:57.899 [2024-12-09 18:15:20.758990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.759015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.759179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.759218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.759342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.759370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.759469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.759497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 00:25:57.899 [2024-12-09 18:15:20.759617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.899 [2024-12-09 18:15:20.759644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.899 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.759725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.759751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.759832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.759860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.759975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.760101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.760244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.760390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.760529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.760648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.760762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.760869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.760895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.760981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.761144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.761251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.761394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.761517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.761671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.761782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.761925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.761950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.762040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.762185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.762330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.762445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.762581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.762690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.762806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.762833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.762986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.763101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.763208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.763341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.763480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 
00:25:57.900 [2024-12-09 18:15:20.763611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.763742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.763911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.763937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.900 qpair failed and we were unable to recover it. 00:25:57.900 [2024-12-09 18:15:20.764017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.900 [2024-12-09 18:15:20.764043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.764125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.764151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-12-09 18:15:20.764262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.764288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.764362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.764387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.764495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.764521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.764706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.764734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.764864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.764903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-12-09 18:15:20.765023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.765050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.765138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.765164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.765277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.765301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.765451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.765481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-12-09 18:15:20.765588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-12-09 18:15:20.765627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-12-09 18:15:20.765720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.765748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.765862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.765888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.766912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.766992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.767914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.767939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.768966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.768993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.769105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.769132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.769287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.901 [2024-12-09 18:15:20.769326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.901 qpair failed and we were unable to recover it.
00:25:57.901 [2024-12-09 18:15:20.769446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.769472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.769592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.769618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.769708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.769734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.769819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.769844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.769951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.769976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.770909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.770935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.771906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.771930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.772869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.772984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.773835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.773977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.774002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.774113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.774139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.774229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.774256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.774344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.774373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.774467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.902 [2024-12-09 18:15:20.774507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.902 qpair failed and we were unable to recover it.
00:25:57.902 [2024-12-09 18:15:20.774613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.774642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.774781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.774808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.774919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.774947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.775902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.775927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.776927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.776953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.777935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.777960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.778914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.778940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.779049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.779075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.779159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.779186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.779263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.779289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.779393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.779418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.779524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.779562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.903 [2024-12-09 18:15:20.779647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.903 [2024-12-09 18:15:20.779673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.903 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.779779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.779804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.779909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.779934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.780069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.780177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.780289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.780427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.780537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.904 [2024-12-09 18:15:20.780662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.904 qpair failed and we were unable to recover it.
00:25:57.904 [2024-12-09 18:15:20.780771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.780796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.780881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.780907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 
00:25:57.904 [2024-12-09 18:15:20.781440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.781931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.781956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 
00:25:57.904 [2024-12-09 18:15:20.782059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.782163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.782297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.782437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.782559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 
00:25:57.904 [2024-12-09 18:15:20.782678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.782851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.782964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.782990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.783100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.783240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 
00:25:57.904 [2024-12-09 18:15:20.783421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.783557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.783703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.783833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.783972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.783996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 
00:25:57.904 [2024-12-09 18:15:20.784079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.784107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.784247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.904 [2024-12-09 18:15:20.784276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.904 qpair failed and we were unable to recover it. 00:25:57.904 [2024-12-09 18:15:20.784401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.784440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.784532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.784572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.784692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.784718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.784798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.784823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.784912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.784937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.785071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.785193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.785338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.785476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.785621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.785740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.785895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.785933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.786042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.786185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.786285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.786445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.786557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.786676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.786794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.786934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.786959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.787041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.787176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.787316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.787429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.787575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.787722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.787890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.787916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.788001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.788167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.788305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.788416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.788564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.788699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.788828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.788965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.788990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.789101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.789126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.789220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.789258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.789408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.789435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 
00:25:57.905 [2024-12-09 18:15:20.789655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.789685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.905 qpair failed and we were unable to recover it. 00:25:57.905 [2024-12-09 18:15:20.789774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.905 [2024-12-09 18:15:20.789800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.789913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.789941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.790028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.790179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 
00:25:57.906 [2024-12-09 18:15:20.790306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.790426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.790540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.790660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.790789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 
00:25:57.906 [2024-12-09 18:15:20.790925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.790952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 
00:25:57.906 [2024-12-09 18:15:20.791561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.791964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.791992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 00:25:57.906 [2024-12-09 18:15:20.792109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.906 [2024-12-09 18:15:20.792142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.906 qpair failed and we were unable to recover it. 
00:25:57.906 [2024-12-09 18:15:20.792236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.792266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.792383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.792408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.792523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.792555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.792653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.792681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.792760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.792785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.792864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.792889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.793919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.793946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.906 [2024-12-09 18:15:20.794789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.906 [2024-12-09 18:15:20.794815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.906 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.794899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.794925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.795949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.795975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.796877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.796904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.797888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.797916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.798922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.798950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.799920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.907 [2024-12-09 18:15:20.799946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.907 qpair failed and we were unable to recover it.
00:25:57.907 [2024-12-09 18:15:20.800026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.800955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.800982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.801907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.801992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.802901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.802979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.803958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.803983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.804067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.804094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.804201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.804234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.804363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.908 [2024-12-09 18:15:20.804402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.908 qpair failed and we were unable to recover it.
00:25:57.908 [2024-12-09 18:15:20.804531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.908 [2024-12-09 18:15:20.804565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.908 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.804655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.804681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.804794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.804819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.804900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.804926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.805015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.805157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.805271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.805426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.805597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.805740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.805842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.805948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.805979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.806403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.806897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.806924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.807035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.807172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.807305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.807428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.807580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.807697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.807806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.807916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.807942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.808085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.808223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.808334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.808452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.808573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.808716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.808852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.909 [2024-12-09 18:15:20.808962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.808986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.809065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.809168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.809193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.809288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.809333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 00:25:57.909 [2024-12-09 18:15:20.809454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.909 [2024-12-09 18:15:20.809481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.909 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.809569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.809598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.809686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.809713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.809803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.809830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.809909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.809935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.810024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.810160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.810299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.810441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.810569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.810711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.810847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.810873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.811552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.811951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.811978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.812059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.812175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.812342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.812461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.812579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.812724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.812882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.812910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.812991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.813139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.813279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.813398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.813566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.813691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.813801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.813937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.813962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.814081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.814106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 
00:25:57.910 [2024-12-09 18:15:20.814190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.814216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.814297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.814323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.814399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.814424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.814513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.910 [2024-12-09 18:15:20.814543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.910 qpair failed and we were unable to recover it. 00:25:57.910 [2024-12-09 18:15:20.814635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.814661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 
00:25:57.911 [2024-12-09 18:15:20.814744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.814769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.814860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.814888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.814978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 
00:25:57.911 [2024-12-09 18:15:20.815340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.815827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.911 [2024-12-09 18:15:20.815858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:57.911 [2024-12-09 18:15:20.815874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.911 [2024-12-09 18:15:20.815886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.911 [2024-12-09 18:15:20.815899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.911 [2024-12-09 18:15:20.815915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.815940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.816048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.816074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.816216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.816243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 00:25:57.911 [2024-12-09 18:15:20.816358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.911 [2024-12-09 18:15:20.816386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.911 qpair failed and we were unable to recover it. 
00:25:57.911 [2024-12-09 18:15:20.816473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.816500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.816602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.816628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.816739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.816765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.816877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.816903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.816993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:25:57.911 [2024-12-09 18:15:20.817656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:25:57.911 [2024-12-09 18:15:20.817684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:25:57.911 [2024-12-09 18:15:20.817596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:57.911 [2024-12-09 18:15:20.817768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.817878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.817903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.818911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.818939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.911 [2024-12-09 18:15:20.819022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.911 [2024-12-09 18:15:20.819050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.911 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.819898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.819978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.820912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.820938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.821890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.821917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.822948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.822975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.823895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.912 [2024-12-09 18:15:20.823921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.912 qpair failed and we were unable to recover it.
00:25:57.912 [2024-12-09 18:15:20.824002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.824961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.824987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.825868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.825979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.826966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.826991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.827867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.827893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.913 [2024-12-09 18:15:20.828000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.913 [2024-12-09 18:15:20.828026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.913 qpair failed and we were unable to recover it.
00:25:57.914 [2024-12-09 18:15:20.828120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.828254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.828365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.828471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.828609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.828715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.828827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.828948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.828973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.829075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.829199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.829336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.829488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.829606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.829726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.829868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.829968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.829993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.830540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.830888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.830915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.831106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.831735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.831867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.831974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.832101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.832242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 
00:25:57.914 [2024-12-09 18:15:20.832400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.832538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.832653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.832798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.914 [2024-12-09 18:15:20.832825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.914 qpair failed and we were unable to recover it. 00:25:57.914 [2024-12-09 18:15:20.832914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.832942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.833018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.833606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.833940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.833966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.834061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.834185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.834336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.834469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.834644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.834756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.834908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.834934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.835464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.835918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.835944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.836052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.836194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.836311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.836439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.836605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.836716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.836824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.836942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.836968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.837083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.837110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.837231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.837259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 
00:25:57.915 [2024-12-09 18:15:20.837359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.837398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.837558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.837586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.837667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.837694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.915 qpair failed and we were unable to recover it. 00:25:57.915 [2024-12-09 18:15:20.837784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.915 [2024-12-09 18:15:20.837810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.837899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.837926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.838008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.838113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.838235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.838393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.838506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.838637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.838747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.838861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.838890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.839259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.839858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.839884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.839999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.840493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.840888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.840989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.841114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.841230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.841348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.841465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.841579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.841686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.841798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.841934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.841960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.842069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.842095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.842183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.842208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 
00:25:57.916 [2024-12-09 18:15:20.842288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.842316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.842399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.842428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.916 [2024-12-09 18:15:20.842539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.916 [2024-12-09 18:15:20.842571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.916 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.842654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.842682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.842767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.842794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.842875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.842903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.842995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.843454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.843869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.843982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.844086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.844192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.844308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.844471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.844592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.844693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.844797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.844912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.844939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.845248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.845874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.845900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.845998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.846108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.846215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.846323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 
00:25:57.917 [2024-12-09 18:15:20.846435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.846567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.917 qpair failed and we were unable to recover it. 00:25:57.917 [2024-12-09 18:15:20.846682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.917 [2024-12-09 18:15:20.846708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.846788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.846814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.846904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.846933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 
00:25:57.918 [2024-12-09 18:15:20.847055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.847168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.847289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.847457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.847605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 
00:25:57.918 [2024-12-09 18:15:20.847723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.847828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.847957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.847983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.848067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.848181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 
00:25:57.918 [2024-12-09 18:15:20.848340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.848479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.848601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.848723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.848837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 
00:25:57.918 [2024-12-09 18:15:20.848969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.848994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 
00:25:57.918 [2024-12-09 18:15:20.849573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.849902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.849929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 00:25:57.918 [2024-12-09 18:15:20.850010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.918 [2024-12-09 18:15:20.850038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.918 qpair failed and we were unable to recover it. 
00:25:57.918 [2024-12-09 18:15:20.850130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.850906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.850984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.851010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.851089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.851115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.851248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.851287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.851374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.851402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.918 [2024-12-09 18:15:20.851486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.918 [2024-12-09 18:15:20.851514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.918 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.851622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.851649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.851734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.851760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.851847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.851877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.851956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.851983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.852871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.852982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.853927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.853953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.854951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.854977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.855869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.855982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.856008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.856093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.856121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.856206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.856233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.856319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.919 [2024-12-09 18:15:20.856345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.919 qpair failed and we were unable to recover it.
00:25:57.919 [2024-12-09 18:15:20.856426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.856453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.856533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.856565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.856652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.856678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.856756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.856786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.856869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.856894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.856976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.857922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.857947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.858906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.858983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.859955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.859982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.860946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.860971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.920 qpair failed and we were unable to recover it.
00:25:57.920 [2024-12-09 18:15:20.861060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.920 [2024-12-09 18:15:20.861089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.861929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.861958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.862930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.862956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.921 [2024-12-09 18:15:20.863909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.921 qpair failed and we were unable to recover it.
00:25:57.921 [2024-12-09 18:15:20.863992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 
00:25:57.921 [2024-12-09 18:15:20.864592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.864958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.864984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.865074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.865100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 
00:25:57.921 [2024-12-09 18:15:20.865194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.865222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.865304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.865331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.865418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.865448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.865566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.865592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 00:25:57.921 [2024-12-09 18:15:20.865682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.921 [2024-12-09 18:15:20.865708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.921 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.865794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.865820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.865963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.865988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.866416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.866855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.866959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.866984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.867077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.867244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.867360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.867474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.867630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.867739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.867851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.867877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.868249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.868841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.868947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.868973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 
00:25:57.922 [2024-12-09 18:15:20.869402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.922 qpair failed and we were unable to recover it. 00:25:57.922 [2024-12-09 18:15:20.869889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.922 [2024-12-09 18:15:20.869915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [2024-12-09 18:15:20.870003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [2024-12-09 18:15:20.870583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.870939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.870964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [2024-12-09 18:15:20.871157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [2024-12-09 18:15:20.871726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.871942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.871968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.872044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.872146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [2024-12-09 18:15:20.872250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.872350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.872449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.872565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 00:25:57.923 [2024-12-09 18:15:20.872676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.923 [2024-12-09 18:15:20.872701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [2024-12-09 18:15:20.872777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:57.923 [2024-12-09 18:15:20.872802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 
00:25:57.923 qpair failed and we were unable to recover it. 
00:25:57.923 [... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages repeated continuously through 18:15:20.886 for tqpair values 0x20aefa0, 0x7efef8000b90, 0x7efefc000b90, and 0x7eff04000b90, all targeting addr=10.0.0.2, port=4420; duplicate log lines elided ...]
00:25:57.926 [2024-12-09 18:15:20.886587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.886614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.886700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.886727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.886809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.886836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.886912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.886939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 
00:25:57.926 [2024-12-09 18:15:20.887159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 
00:25:57.926 [2024-12-09 18:15:20.887734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.926 [2024-12-09 18:15:20.887972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.926 [2024-12-09 18:15:20.887998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.926 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.888092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.888226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.888352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.888493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.888613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.888722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.888860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.888886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.889024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.889598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.889939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.889966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.890168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.890723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.890884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.890973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.891075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.891181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.891318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.891417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.891530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.891657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.891763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.891885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.891912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.892002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.892029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.892113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.892141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.892225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.892253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.892342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.892367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 
00:25:57.927 [2024-12-09 18:15:20.892444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.892470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.892565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.927 [2024-12-09 18:15:20.892591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.927 qpair failed and we were unable to recover it. 00:25:57.927 [2024-12-09 18:15:20.892690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.892729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.892850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.892876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.892988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.893093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.893203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.893309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.893431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.893659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.893762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.893864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.893969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.893994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.894094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.894231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.894376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.894483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.894605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.894717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.894824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.894928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.894953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.895558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.895928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.895956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.896041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.896153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.896263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.896415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.896574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.896680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.896827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.896963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.896988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.897065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.897091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.897170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.897196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.897287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.897327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 
00:25:57.928 [2024-12-09 18:15:20.897427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.928 [2024-12-09 18:15:20.897456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.928 qpair failed and we were unable to recover it. 00:25:57.928 [2024-12-09 18:15:20.897549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.897578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.897673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.897700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.897819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.897845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.897928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.897954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.898038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.898155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.898262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.898405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.898514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.898629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.898761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.898870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.898897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.899294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.899860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.899965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.899991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.900470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.900909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.900934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.901017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.901567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.901934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.901959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 00:25:57.929 [2024-12-09 18:15:20.902043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.902070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.929 qpair failed and we were unable to recover it. 
00:25:57.929 [2024-12-09 18:15:20.902153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.929 [2024-12-09 18:15:20.902180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.902295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.902324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.902408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.902435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.902551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.902577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.902658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.902683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.902764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.902790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.902899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.902925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.903423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.903871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.903896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.903978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.904554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.904919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.904946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.905142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.905701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.905944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.905970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.906045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.906149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.906252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.906392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.906530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.906643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 00:25:57.930 [2024-12-09 18:15:20.906746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.930 [2024-12-09 18:15:20.906772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:57.930 qpair failed and we were unable to recover it. 
00:25:57.930 [2024-12-09 18:15:20.906852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.906879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.906962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.906990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.907917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.907995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:57.931 [2024-12-09 18:15:20.908843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.931 [2024-12-09 18:15:20.908868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:57.931 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.909883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.909998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.910956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.910982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.911899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.911926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.912892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.912918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.203 qpair failed and we were unable to recover it.
00:25:58.203 [2024-12-09 18:15:20.913744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.203 [2024-12-09 18:15:20.913771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.913859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.913885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.913995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.914957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.914985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.915944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.915970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.916966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.916992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.917938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.917965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.918905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.918984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.919955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.919982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.920092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.920119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.920230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.920256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.920335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.920361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.920444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.920473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.920563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.204 [2024-12-09 18:15:20.920590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.204 qpair failed and we were unable to recover it.
00:25:58.204 [2024-12-09 18:15:20.920679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.920706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.920787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.920813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.920952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.920978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-12-09 18:15:20.921305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-12-09 18:15:20.921873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.921900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.921977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.922004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.922113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.922139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.922220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.922246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-12-09 18:15:20.922328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-12-09 18:15:20.922355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.922437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.922464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.922559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.922587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.922673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.922700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.922783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.922810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.922927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.922953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.923034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.923142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.923251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.923358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.923471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.923622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.923758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.923867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.923898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.924281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.924856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.924883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.924995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.925446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.925874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.925971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.925996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.926539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.926911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.926995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.927102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.927221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.927334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.927439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.927581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.927715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.927823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.927929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.927955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.928251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.928823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.928923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.928948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.205 [2024-12-09 18:15:20.929339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-12-09 18:15:20.929820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-12-09 18:15:20.929846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.207 [2024-12-09 18:15:20.943062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.207 [2024-12-09 18:15:20.943088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.207 qpair failed and we were unable to recover it. 00:25:58.207 [2024-12-09 18:15:20.943206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.207 [2024-12-09 18:15:20.943233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.207 qpair failed and we were unable to recover it. 00:25:58.207 [2024-12-09 18:15:20.943313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.207 [2024-12-09 18:15:20.943340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.207 qpair failed and we were unable to recover it. 00:25:58.207 [2024-12-09 18:15:20.943446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.207 [2024-12-09 18:15:20.943472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.207 qpair failed and we were unable to recover it. 00:25:58.207 [2024-12-09 18:15:20.943559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.207 [2024-12-09 18:15:20.943586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.207 qpair failed and we were unable to recover it. 
00:25:58.207 [2024-12-09 18:15:20.943674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.943701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.943783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.943809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.943918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.943944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.944954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.944978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.945912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.945939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.946903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.946927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.947038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.947063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.947145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.947172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.947255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.947281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.207 qpair failed and we were unable to recover it.
00:25:58.207 [2024-12-09 18:15:20.947395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.207 [2024-12-09 18:15:20.947422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.947503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.947529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.947617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.947648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.947736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.947762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.947849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.947875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.947950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.947976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.948900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.948982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.949922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.949948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.950891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.950926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.951967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.951993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.952928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.952953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:58.208 [2024-12-09 18:15:20.953063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:25:58.208 [2024-12-09 18:15:20.953167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.953292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.953391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:58.208 [2024-12-09 18:15:20.953523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.953642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.953743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.953844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.953945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.953970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.954054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.954078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.954190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.208 [2024-12-09 18:15:20.954215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.208 qpair failed and we were unable to recover it.
00:25:58.208 [2024-12-09 18:15:20.954331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.954356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.954436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.954460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.954543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.954575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.954667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.954692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.954804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.954828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-12-09 18:15:20.954904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.954929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.955005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.955030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.955107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.955132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.955214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.955247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-12-09 18:15:20.955357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-12-09 18:15:20.955385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.955483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.955518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.955647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.955676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.955758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.955785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.955863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.955890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.955977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.956131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.956265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.956389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.956497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.956616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.956757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.956901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.956927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.957410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.957935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.957962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.958048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.958184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.958316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.958449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.958555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.958665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.958806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.958918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.958945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.959255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.959854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.959959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.959988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.960455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.960924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.960950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.961036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.961151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.961255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.961404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.961513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.961642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.961750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.961905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.961935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.962020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.962139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.962248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.962356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.962462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.962605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.962727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.962893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.962921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.963439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.963921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.963948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-12-09 18:15:20.964035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.964061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.964143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.964169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.964278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.964306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-12-09 18:15:20.964389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-12-09 18:15:20.964417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.964529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.964563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.964651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.964677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.964757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.964783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.964857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.964883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.964989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.965128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.965246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.965360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.965471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.965624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.965737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.965845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.965953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.965979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.966451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.966916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.966942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.967059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.967178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.967282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.967391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.967537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.967757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.967860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.967886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.967975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.968083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.968188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.968285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.968401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.968536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.968662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.968773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.968896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.968922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.969459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.969957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.969982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.970097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.970657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.970904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.970991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.971093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.971196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.971297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.971407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.971526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.971677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.971780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.971888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.971914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.972000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.972025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.972132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.972157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.972244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.972270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-12-09 18:15:20.972357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.972384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.972478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-12-09 18:15:20.972503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-12-09 18:15:20.972622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.972649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.972732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.972758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.972845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.972871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-12-09 18:15:20.973009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-12-09 18:15:20.973587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.973915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.973941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-12-09 18:15:20.974136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-12-09 18:15:20.974706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.974954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.974980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.975059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.975176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-12-09 18:15:20.975310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.975444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.975586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.975696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-12-09 18:15:20.975805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-12-09 18:15:20.975833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-12-09 18:15:20.975980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.976910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.976937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.977847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.977880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:58.211 [2024-12-09 18:15:20.977973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:58.211 [2024-12-09 18:15:20.978203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:58.211 [2024-12-09 18:15:20.978339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 18:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:58.211 [2024-12-09 18:15:20.978531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.978894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.978975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.979890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.979977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.211 [2024-12-09 18:15:20.980799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.211 qpair failed and we were unable to recover it.
00:25:58.211 [2024-12-09 18:15:20.980886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.980915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.980993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.981935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.981961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.982922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.982947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.983892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.983920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.984885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.984910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.985886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.985976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.212 [2024-12-09 18:15:20.986697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.212 qpair failed and we were unable to recover it.
00:25:58.212 [2024-12-09 18:15:20.986813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.986841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.986935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.986967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.987159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.987271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.987385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-12-09 18:15:20.987490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.987713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.987831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.987946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.987973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.988080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-12-09 18:15:20.988194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.988312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.988434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.988582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.988713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-12-09 18:15:20.988829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.988939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.988964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.989047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.989072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-12-09 18:15:20.989152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-12-09 18:15:20.989177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.989259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.989284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.989409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.989446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.989541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.989580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.989668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.989695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.989815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.989841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.989920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.989946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.990035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.990145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.990286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.990408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.990532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.990655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.990873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.990899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.990978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.991094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.991204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.991427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.991534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.991662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.991772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.991884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.991910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.992004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.992143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.992365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.992477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.992596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.992719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.992817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.992922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.992948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.993136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.993281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.993403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.993528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.993693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.993811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.993954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.993980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.994070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.994256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.994367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.994504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.994630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.994755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.994870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.994897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.995366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.995821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.995926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.995951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.996487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.996909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.996935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.997018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.997132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.997245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.997389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.997495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.997667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.997792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.997937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.997963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.998044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.998070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.998157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.998183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.998298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.998324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-12-09 18:15:20.998413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.998439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.213 [2024-12-09 18:15:20.998632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.213 [2024-12-09 18:15:20.998658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.213 qpair failed and we were unable to recover it. 00:25:58.214 [2024-12-09 18:15:20.998846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.214 [2024-12-09 18:15:20.998872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.214 qpair failed and we were unable to recover it. 00:25:58.214 [2024-12-09 18:15:20.998956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.214 [2024-12-09 18:15:20.998982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.214 qpair failed and we were unable to recover it. 00:25:58.214 [2024-12-09 18:15:20.999063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.214 [2024-12-09 18:15:20.999091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.214 qpair failed and we were unable to recover it. 
00:25:58.214 [2024-12-09 18:15:20.999192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:20.999305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:20.999436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:20.999593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:20.999714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:20.999820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:20.999927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:20.999952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.000914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.000940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.001929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.001955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.002935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.002961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.003895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.003921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.004887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.004913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.005876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.005975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.006878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.006905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.007024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.007050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.007133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.007159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.007247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.007273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.007387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.007413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.007493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.007518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.214 qpair failed and we were unable to recover it.
00:25:58.214 [2024-12-09 18:15:21.007601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.214 [2024-12-09 18:15:21.007628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.007727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.007753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.007842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.007868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.007943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.007969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.008926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.008954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.009947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.009974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.010060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.010086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.010174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.010200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.010285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.010316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.010393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.215 [2024-12-09 18:15:21.010419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.215 qpair failed and we were unable to recover it.
00:25:58.215 [2024-12-09 18:15:21.010504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.010530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.010635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.010662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.010749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.010775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.010854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.010880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.010966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.010992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.011106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.011214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.011319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.011429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.011540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.011661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.011775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.011931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.011958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.012043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.012142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.012251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.012365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.012482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.012638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.012787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.012897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.012922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.013460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.013968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.013993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.014079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.014638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.014966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.014991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.015073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.015099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-12-09 18:15:21.015196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.015222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.015296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.015321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.015397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.015423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.015505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-12-09 18:15:21.015531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-12-09 18:15:21.015636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.015662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.015777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.015803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.015895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.015924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.016376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.016848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.016956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.016981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.017575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.017920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.017948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.018159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.018754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.018913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.018993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.019109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.019213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.019366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.019478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.019618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.019739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.019859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.019885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.019998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.020137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.020260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.020376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 Malloc0 00:25:58.216 [2024-12-09 18:15:21.020535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.020664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.020772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.020915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.020942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.021023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.021049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:58.216 [2024-12-09 18:15:21.021128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.021155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.216 [2024-12-09 18:15:21.021242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.021270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.216 [2024-12-09 18:15:21.021368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.021394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-12-09 18:15:21.021474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-12-09 18:15:21.021499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-12-09 18:15:21.021595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.021623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.021724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.021750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.021841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.021867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.021950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.021976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.022908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.022935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.023963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.023989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.024069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.024095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.216 [2024-12-09 18:15:21.024177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.216 [2024-12-09 18:15:21.024204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.216 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:58.217 [2024-12-09 18:15:21.024290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.024316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.024438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.024553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.024664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.024785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.024894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.024980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.025897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.025929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.026963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.026989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.027893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.027918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.028941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.028967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.029901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.029926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.030952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.030977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.031946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.031971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.032054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.032079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
00:25:58.217 [2024-12-09 18:15:21.032171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-12-09 18:15:21.032198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.218 qpair failed and we were unable to recover it.
00:25:58.218 [2024-12-09 18:15:21.032289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.218 [2024-12-09 18:15:21.032315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.218 qpair failed and we were unable to recover it.
00:25:58.218 [2024-12-09 18:15:21.032403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.218 [2024-12-09 18:15:21.032432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.218 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.218 qpair failed and we were unable to recover it.
00:25:58.218 [2024-12-09 18:15:21.032521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.218 [2024-12-09 18:15:21.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.218 qpair failed and we were unable to recover it.
00:25:58.218 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.218 [2024-12-09 18:15:21.032647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.032673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.218 [2024-12-09 18:15:21.032758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.032783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.218 [2024-12-09 18:15:21.032859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.032884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.032981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.033091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.033200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.033306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.033418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.033533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.033664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.033789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.033899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.033927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.034262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.034822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.034929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.034955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.035399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.035823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.035937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.035964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.036498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.036963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.036989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.037067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.037170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.037288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.037403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.037516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.037634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.037776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.037889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.037915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.038006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.038139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.038246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.038366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.038506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff04000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.038671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.038784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.038893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.038919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.039516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.039870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.039896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.040005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.040033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 [2024-12-09 18:15:21.040161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.040187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.040268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.040293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 [2024-12-09 18:15:21.040378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.040402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.218 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.218 [2024-12-09 18:15:21.040506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.040530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 
00:25:58.218 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.218 [2024-12-09 18:15:21.040628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.218 [2024-12-09 18:15:21.040656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.218 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.040753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.219 [2024-12-09 18:15:21.040781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.040873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.219 [2024-12-09 18:15:21.040901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.040984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 
00:25:58.219 [2024-12-09 18:15:21.041092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.041251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.041362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.041479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.041598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 
00:25:58.219 [2024-12-09 18:15:21.041706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.041819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.041928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.041954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.042032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.042060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 00:25:58.219 [2024-12-09 18:15:21.042139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.219 [2024-12-09 18:15:21.042163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420 00:25:58.219 qpair failed and we were unable to recover it. 
00:25:58.219 [2024-12-09 18:15:21.042248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.219 [2024-12-09 18:15:21.042273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.219 qpair failed and we were unable to recover it.
00:25:58.219 [2024-12-09 18:15:21.042457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.219 [2024-12-09 18:15:21.042484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efefc000b90 with addr=10.0.0.2, port=4420
00:25:58.219 qpair failed and we were unable to recover it.
00:25:58.219 [2024-12-09 18:15:21.042695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.219 [2024-12-09 18:15:21.042723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efef8000b90 with addr=10.0.0.2, port=4420
00:25:58.219 qpair failed and we were unable to recover it.
00:25:58.219 [2024-12-09 18:15:21.048139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.219 [2024-12-09 18:15:21.048166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20aefa0 with addr=10.0.0.2, port=4420
00:25:58.219 qpair failed and we were unable to recover it.
00:25:58.219 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.219 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:58.219 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:58.219 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:58.220 [2024-12-09 18:15:21.052499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:58.220 [2024-12-09 18:15:21.055032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.220 [2024-12-09 18:15:21.055182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.220 [2024-12-09 18:15:21.055210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.220 [2024-12-09 18:15:21.055226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.220 [2024-12-09 18:15:21.055237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.220 [2024-12-09 18:15:21.055273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.220 qpair failed and we were unable to recover it.
00:25:58.220 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.220 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:58.220 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:58.220 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:58.220 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.220 18:15:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1582818
00:25:58.220 [2024-12-09 18:15:21.064891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.220 [2024-12-09 18:15:21.064998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.220 [2024-12-09 18:15:21.065026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.220 [2024-12-09 18:15:21.065041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.220 [2024-12-09 18:15:21.065059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.220 [2024-12-09 18:15:21.065089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.220 qpair failed and we were unable to recover it.
00:25:58.220 [2024-12-09 18:15:21.075030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.220 [2024-12-09 18:15:21.075126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.220 [2024-12-09 18:15:21.075161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.220 [2024-12-09 18:15:21.075186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.220 [2024-12-09 18:15:21.075207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.220 [2024-12-09 18:15:21.075247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.220 qpair failed and we were unable to recover it.
00:25:58.220 [2024-12-09 18:15:21.084889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.220 [2024-12-09 18:15:21.084994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.220 [2024-12-09 18:15:21.085029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.220 [2024-12-09 18:15:21.085054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.220 [2024-12-09 18:15:21.085074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.220 [2024-12-09 18:15:21.085115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.220 qpair failed and we were unable to recover it.
00:25:58.220 [2024-12-09 18:15:21.094823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.220 [2024-12-09 18:15:21.094915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.220 [2024-12-09 18:15:21.094943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.220 [2024-12-09 18:15:21.094957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.220 [2024-12-09 18:15:21.094969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.220 [2024-12-09 18:15:21.094999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.220 qpair failed and we were unable to recover it.
00:25:58.220 [2024-12-09 18:15:21.104854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.220 [2024-12-09 18:15:21.104954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.220 [2024-12-09 18:15:21.104990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.220 [2024-12-09 18:15:21.105017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.220 [2024-12-09 18:15:21.105039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.220 [2024-12-09 18:15:21.105080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.220 qpair failed and we were unable to recover it.
00:25:58.220 [2024-12-09 18:15:21.114938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.115039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.115075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.115101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.115121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.220 [2024-12-09 18:15:21.115163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.220 qpair failed and we were unable to recover it. 
00:25:58.220 [2024-12-09 18:15:21.124913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.125017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.125052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.125077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.125098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.220 [2024-12-09 18:15:21.125140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.220 qpair failed and we were unable to recover it. 
00:25:58.220 [2024-12-09 18:15:21.134967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.135055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.135084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.135104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.135124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.220 [2024-12-09 18:15:21.135166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.220 qpair failed and we were unable to recover it. 
00:25:58.220 [2024-12-09 18:15:21.144977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.145070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.145098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.145113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.145125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.220 [2024-12-09 18:15:21.145156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.220 qpair failed and we were unable to recover it. 
00:25:58.220 [2024-12-09 18:15:21.155043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.155157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.155190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.155205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.155218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.220 [2024-12-09 18:15:21.155248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.220 qpair failed and we were unable to recover it. 
00:25:58.220 [2024-12-09 18:15:21.165066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.165164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.165199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.165223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.165245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.220 [2024-12-09 18:15:21.165287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.220 qpair failed and we were unable to recover it. 
00:25:58.220 [2024-12-09 18:15:21.175154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.220 [2024-12-09 18:15:21.175296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.220 [2024-12-09 18:15:21.175337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.220 [2024-12-09 18:15:21.175362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.220 [2024-12-09 18:15:21.175383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.221 [2024-12-09 18:15:21.175424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-12-09 18:15:21.185075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.221 [2024-12-09 18:15:21.185171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.221 [2024-12-09 18:15:21.185206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.221 [2024-12-09 18:15:21.185231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.221 [2024-12-09 18:15:21.185253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.221 [2024-12-09 18:15:21.185295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-12-09 18:15:21.195090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.221 [2024-12-09 18:15:21.195178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.221 [2024-12-09 18:15:21.195205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.221 [2024-12-09 18:15:21.195220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.221 [2024-12-09 18:15:21.195241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.221 [2024-12-09 18:15:21.195271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-12-09 18:15:21.205256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.221 [2024-12-09 18:15:21.205381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.221 [2024-12-09 18:15:21.205407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.221 [2024-12-09 18:15:21.205421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.221 [2024-12-09 18:15:21.205433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.221 [2024-12-09 18:15:21.205461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-12-09 18:15:21.215178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.221 [2024-12-09 18:15:21.215262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.221 [2024-12-09 18:15:21.215285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.221 [2024-12-09 18:15:21.215299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.221 [2024-12-09 18:15:21.215316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.221 [2024-12-09 18:15:21.215356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-12-09 18:15:21.225198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.221 [2024-12-09 18:15:21.225288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.221 [2024-12-09 18:15:21.225315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.221 [2024-12-09 18:15:21.225329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.221 [2024-12-09 18:15:21.225341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.221 [2024-12-09 18:15:21.225370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.479 [2024-12-09 18:15:21.235192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.479 [2024-12-09 18:15:21.235278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.479 [2024-12-09 18:15:21.235305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.479 [2024-12-09 18:15:21.235319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.479 [2024-12-09 18:15:21.235332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.479 [2024-12-09 18:15:21.235360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.479 qpair failed and we were unable to recover it. 
00:25:58.479 [2024-12-09 18:15:21.245276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.479 [2024-12-09 18:15:21.245367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.479 [2024-12-09 18:15:21.245393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.479 [2024-12-09 18:15:21.245407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.479 [2024-12-09 18:15:21.245419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.479 [2024-12-09 18:15:21.245447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.479 qpair failed and we were unable to recover it. 
00:25:58.479 [2024-12-09 18:15:21.255288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.255376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.255401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.255415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.255427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.255456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.265340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.265430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.265456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.265470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.265482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.265511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.275319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.275419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.275445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.275458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.275471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.275498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.285399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.285489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.285520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.285535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.285558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.285589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.295415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.295551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.295576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.295591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.295603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.295630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.305416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.305501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.305525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.305539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.305560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.305589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.315439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.315525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.315557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.315572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.315584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.315612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.325502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.325611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.325636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.325650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.325667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.325696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.335593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.335680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.335705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.335719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.335731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.335759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.345538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.345635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.345660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.345674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.345686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.345714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.355587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.355672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.355696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.355710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.355723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.355750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.365667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.365805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.365830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.365844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.365857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.365884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.375653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.375785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.375810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.375824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.480 [2024-12-09 18:15:21.375835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.480 [2024-12-09 18:15:21.375863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.480 qpair failed and we were unable to recover it. 
00:25:58.480 [2024-12-09 18:15:21.385665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.480 [2024-12-09 18:15:21.385749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.480 [2024-12-09 18:15:21.385773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.480 [2024-12-09 18:15:21.385787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.385799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.385827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.395701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.395785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.395814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.395830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.395843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.395871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.405731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.405822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.405848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.405862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.405874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.405902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.415763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.415855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.415885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.415901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.415912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.415940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.425820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.425917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.425942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.425956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.425969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.425996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.435804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.435887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.435915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.435929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.435941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.435969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.445875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.445964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.445989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.446003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.446015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.446043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.455906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.456039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.456064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.456078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.456095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.456124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.465965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.466054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.466079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.466101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.466121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.466161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.475959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.476049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.476077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.476092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.476104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.476132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.485987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.486106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.486132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.486146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.486158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.486186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.496128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.496216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.496241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.496255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.496267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.496294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.506057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.506139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.506165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.506179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.506190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.506218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.481 qpair failed and we were unable to recover it. 
00:25:58.481 [2024-12-09 18:15:21.516155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.481 [2024-12-09 18:15:21.516245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.481 [2024-12-09 18:15:21.516273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.481 [2024-12-09 18:15:21.516289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.481 [2024-12-09 18:15:21.516301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.481 [2024-12-09 18:15:21.516329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.482 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.526175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.526287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.526314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.526328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.526340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.526368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.536122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.536212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.536237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.536252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.536264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.536291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.546147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.546231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.546262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.546277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.546289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.546317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.556180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.556262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.556288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.556301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.556314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.556341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.566207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.566296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.566321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.566335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.566347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.566375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.576255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.576337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.576362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.576376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.576387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.576415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.586272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.586393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.586418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.586431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.586449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.586479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.596389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.596507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.596533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.596554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.596578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.596609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.606397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.606491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.606516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.606530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.606542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.606578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.616356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.616452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.616477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.616491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.616503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.616530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.741 qpair failed and we were unable to recover it. 
00:25:58.741 [2024-12-09 18:15:21.626407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.741 [2024-12-09 18:15:21.626488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.741 [2024-12-09 18:15:21.626513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.741 [2024-12-09 18:15:21.626527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.741 [2024-12-09 18:15:21.626539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.741 [2024-12-09 18:15:21.626577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.636409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.742 [2024-12-09 18:15:21.636488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.742 [2024-12-09 18:15:21.636513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.742 [2024-12-09 18:15:21.636527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.742 [2024-12-09 18:15:21.636539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.742 [2024-12-09 18:15:21.636576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.646456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.742 [2024-12-09 18:15:21.646556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.742 [2024-12-09 18:15:21.646586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.742 [2024-12-09 18:15:21.646601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.742 [2024-12-09 18:15:21.646613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.742 [2024-12-09 18:15:21.646642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.656479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.742 [2024-12-09 18:15:21.656606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.742 [2024-12-09 18:15:21.656633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.742 [2024-12-09 18:15:21.656648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.742 [2024-12-09 18:15:21.656660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.742 [2024-12-09 18:15:21.656688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.666505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.742 [2024-12-09 18:15:21.666599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.742 [2024-12-09 18:15:21.666625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.742 [2024-12-09 18:15:21.666639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.742 [2024-12-09 18:15:21.666651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.742 [2024-12-09 18:15:21.666678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.676538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.742 [2024-12-09 18:15:21.676634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.742 [2024-12-09 18:15:21.676665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.742 [2024-12-09 18:15:21.676680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.742 [2024-12-09 18:15:21.676691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.742 [2024-12-09 18:15:21.676719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.686671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.742 [2024-12-09 18:15:21.686764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.742 [2024-12-09 18:15:21.686792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.742 [2024-12-09 18:15:21.686809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.742 [2024-12-09 18:15:21.686821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:58.742 [2024-12-09 18:15:21.686850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.742 qpair failed and we were unable to recover it. 
00:25:58.742 [2024-12-09 18:15:21.696593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.696675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.696701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.696714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.696726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.696754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.706687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.706794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.706820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.706835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.706846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.706874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.716660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.716774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.716800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.716814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.716831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.716860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.726690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.726793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.726819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.726834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.726846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.726875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.736757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.736879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.736905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.736920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.736932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.736959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.746709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.746789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.746814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.746828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.746840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.746867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.756762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.756850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.756874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.742 [2024-12-09 18:15:21.756888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.742 [2024-12-09 18:15:21.756901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.742 [2024-12-09 18:15:21.756929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.742 qpair failed and we were unable to recover it.
00:25:58.742 [2024-12-09 18:15:21.766828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.742 [2024-12-09 18:15:21.766941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.742 [2024-12-09 18:15:21.766966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.743 [2024-12-09 18:15:21.766980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.743 [2024-12-09 18:15:21.766992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.743 [2024-12-09 18:15:21.767020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.743 qpair failed and we were unable to recover it.
00:25:58.743 [2024-12-09 18:15:21.776862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.743 [2024-12-09 18:15:21.776949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.743 [2024-12-09 18:15:21.776979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.743 [2024-12-09 18:15:21.776999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.743 [2024-12-09 18:15:21.777012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:58.743 [2024-12-09 18:15:21.777045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.743 qpair failed and we were unable to recover it.
00:25:59.001 [2024-12-09 18:15:21.786855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.001 [2024-12-09 18:15:21.786938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.001 [2024-12-09 18:15:21.786964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.001 [2024-12-09 18:15:21.786978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.001 [2024-12-09 18:15:21.786990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.001 [2024-12-09 18:15:21.787019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.001 qpair failed and we were unable to recover it.
00:25:59.001 [2024-12-09 18:15:21.796893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.001 [2024-12-09 18:15:21.796981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.001 [2024-12-09 18:15:21.797006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.001 [2024-12-09 18:15:21.797020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.001 [2024-12-09 18:15:21.797032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.001 [2024-12-09 18:15:21.797060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.001 qpair failed and we were unable to recover it.
00:25:59.001 [2024-12-09 18:15:21.806994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.001 [2024-12-09 18:15:21.807096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.001 [2024-12-09 18:15:21.807127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.001 [2024-12-09 18:15:21.807141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.001 [2024-12-09 18:15:21.807153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.001 [2024-12-09 18:15:21.807181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.001 qpair failed and we were unable to recover it.
00:25:59.001 [2024-12-09 18:15:21.816927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.001 [2024-12-09 18:15:21.817013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.001 [2024-12-09 18:15:21.817038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.001 [2024-12-09 18:15:21.817052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.817064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.817092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.827550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.827663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.827688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.827702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.827714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.827742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.837032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.837116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.837141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.837154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.837166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.837194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.847051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.847141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.847166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.847180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.847201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.847229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.857088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.857170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.857195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.857209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.857221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.857249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.867068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.867153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.867178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.867192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.867204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.867231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.877085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.877219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.877244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.877257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.877269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.877297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.887104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.887197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.887222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.887236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.887248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.887275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.897138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.897222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.897247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.897261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.897273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.897300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.907169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.907256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.907282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.907295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.907307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.907335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.917318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.917451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.917475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.917489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.917500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.917528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.927254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.927348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.927373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.927387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.927398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.927426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.937231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.937327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.937357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.937372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.937384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.937411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.002 [2024-12-09 18:15:21.947250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.002 [2024-12-09 18:15:21.947350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.002 [2024-12-09 18:15:21.947375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.002 [2024-12-09 18:15:21.947389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.002 [2024-12-09 18:15:21.947401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.002 [2024-12-09 18:15:21.947428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.002 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:21.957392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:21.957476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:21.957501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:21.957516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:21.957527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:21.957562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:21.967342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:21.967437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:21.967462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:21.967482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:21.967502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:21.967543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:21.977362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:21.977454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:21.977482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:21.977496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:21.977514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:21.977551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:21.987376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:21.987494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:21.987520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:21.987534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:21.987553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:21.987583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:21.997403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:21.997492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:21.997517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:21.997531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:21.997543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:21.997584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:22.007457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:22.007559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:22.007584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:22.007598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:22.007611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:22.007638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:22.017466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:22.017560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:22.017585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:22.017599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:22.017611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:22.017639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:22.027507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:22.027624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:22.027649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:22.027662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:22.027674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:22.027702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.003 [2024-12-09 18:15:22.037563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.003 [2024-12-09 18:15:22.037685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.003 [2024-12-09 18:15:22.037713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.003 [2024-12-09 18:15:22.037728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.003 [2024-12-09 18:15:22.037740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:25:59.003 [2024-12-09 18:15:22.037768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.003 qpair failed and we were unable to recover it.
00:25:59.264 [2024-12-09 18:15:22.047576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.264 [2024-12-09 18:15:22.047686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.264 [2024-12-09 18:15:22.047712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.264 [2024-12-09 18:15:22.047727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.264 [2024-12-09 18:15:22.047739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.264 [2024-12-09 18:15:22.047768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.264 qpair failed and we were unable to recover it. 
00:25:59.264 [2024-12-09 18:15:22.057601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.264 [2024-12-09 18:15:22.057687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.264 [2024-12-09 18:15:22.057713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.264 [2024-12-09 18:15:22.057727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.264 [2024-12-09 18:15:22.057739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.264 [2024-12-09 18:15:22.057767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.264 qpair failed and we were unable to recover it. 
00:25:59.264 [2024-12-09 18:15:22.067608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.264 [2024-12-09 18:15:22.067720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.264 [2024-12-09 18:15:22.067750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.264 [2024-12-09 18:15:22.067765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.264 [2024-12-09 18:15:22.067777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.264 [2024-12-09 18:15:22.067805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.264 qpair failed and we were unable to recover it. 
00:25:59.264 [2024-12-09 18:15:22.077634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.264 [2024-12-09 18:15:22.077758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.264 [2024-12-09 18:15:22.077784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.264 [2024-12-09 18:15:22.077798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.264 [2024-12-09 18:15:22.077809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.264 [2024-12-09 18:15:22.077837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.264 qpair failed and we were unable to recover it. 
00:25:59.264 [2024-12-09 18:15:22.087691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.264 [2024-12-09 18:15:22.087787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.264 [2024-12-09 18:15:22.087815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.264 [2024-12-09 18:15:22.087831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.264 [2024-12-09 18:15:22.087843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.264 [2024-12-09 18:15:22.087873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.264 qpair failed and we were unable to recover it. 
00:25:59.264 [2024-12-09 18:15:22.097687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.097784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.097810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.097824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.097836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.097864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.107714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.107796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.107821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.107835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.107852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.107880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.117750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.117838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.117863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.117877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.117889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.117917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.127806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.127913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.127938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.127952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.127963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.127991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.137820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.137914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.137940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.137954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.137967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.137994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.147935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.148016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.148042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.148055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.148067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.148094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.157991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.158081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.158106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.158121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.158133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.158160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.167975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.168065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.168090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.168104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.168115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.168143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.177927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.178065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.178091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.178105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.178116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.178143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.187988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.188106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.188131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.188145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.188156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.188184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.197969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.198061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.198091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.198106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.198118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.198146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.208016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.208109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.208134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.208148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.265 [2024-12-09 18:15:22.208160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.265 [2024-12-09 18:15:22.208188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.265 qpair failed and we were unable to recover it. 
00:25:59.265 [2024-12-09 18:15:22.218048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.265 [2024-12-09 18:15:22.218163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.265 [2024-12-09 18:15:22.218188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.265 [2024-12-09 18:15:22.218203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.218214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.218242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.228093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.228174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.228202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.228217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.228229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.228258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.238089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.238172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.238198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.238217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.238231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.238259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.248129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.248223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.248248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.248261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.248273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.248301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.258174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.258262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.258289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.258303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.258315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.258343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.268236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.268335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.268361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.268376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.268388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.268416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.278203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.278295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.278321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.278335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.278347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.278375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.288229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.288320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.288346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.288360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.288372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.288400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.266 [2024-12-09 18:15:22.298282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.266 [2024-12-09 18:15:22.298369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.266 [2024-12-09 18:15:22.298395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.266 [2024-12-09 18:15:22.298409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.266 [2024-12-09 18:15:22.298421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.266 [2024-12-09 18:15:22.298448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.266 qpair failed and we were unable to recover it. 
00:25:59.528 [2024-12-09 18:15:22.308295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.528 [2024-12-09 18:15:22.308402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.528 [2024-12-09 18:15:22.308429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.528 [2024-12-09 18:15:22.308443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.528 [2024-12-09 18:15:22.308460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.528 [2024-12-09 18:15:22.308490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.528 qpair failed and we were unable to recover it. 
00:25:59.528 [2024-12-09 18:15:22.318306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.528 [2024-12-09 18:15:22.318400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.528 [2024-12-09 18:15:22.318426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.528 [2024-12-09 18:15:22.318441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.528 [2024-12-09 18:15:22.318453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.528 [2024-12-09 18:15:22.318480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.528 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.669303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.669398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.669424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.669438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.669450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.669478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.679371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.679488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.679513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.679527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.679539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.679575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.689464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.689563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.689589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.689603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.689614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.689642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.699416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.699516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.699541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.699562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.699574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.699602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.709411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.709504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.709532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.709558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.709573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.709602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.719518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.719672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.719705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.719729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.719750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.719792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.729521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.729625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.729652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.729666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.729678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.729708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.739508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.739601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.739628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.739642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.739654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.739682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.749530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.749624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.791 [2024-12-09 18:15:22.749650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.791 [2024-12-09 18:15:22.749664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.791 [2024-12-09 18:15:22.749676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.791 [2024-12-09 18:15:22.749704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.791 qpair failed and we were unable to recover it. 
00:25:59.791 [2024-12-09 18:15:22.759576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.791 [2024-12-09 18:15:22.759670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.759695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.759714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.759727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.759755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:25:59.792 [2024-12-09 18:15:22.769627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.792 [2024-12-09 18:15:22.769738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.769764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.769778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.769789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.769817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:25:59.792 [2024-12-09 18:15:22.779707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.792 [2024-12-09 18:15:22.779802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.779828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.779841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.779853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.779881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:25:59.792 [2024-12-09 18:15:22.789652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.792 [2024-12-09 18:15:22.789747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.789772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.789785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.789798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.789825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:25:59.792 [2024-12-09 18:15:22.799679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.792 [2024-12-09 18:15:22.799773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.799799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.799813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.799825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.799853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:25:59.792 [2024-12-09 18:15:22.809721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.792 [2024-12-09 18:15:22.809812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.809837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.809851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.809863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.809891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:25:59.792 [2024-12-09 18:15:22.819795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.792 [2024-12-09 18:15:22.819889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.792 [2024-12-09 18:15:22.819919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.792 [2024-12-09 18:15:22.819933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.792 [2024-12-09 18:15:22.819944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:25:59.792 [2024-12-09 18:15:22.819972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.792 qpair failed and we were unable to recover it. 
00:26:00.053 [2024-12-09 18:15:22.829795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.053 [2024-12-09 18:15:22.829886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.053 [2024-12-09 18:15:22.829914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.053 [2024-12-09 18:15:22.829929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.053 [2024-12-09 18:15:22.829941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.053 [2024-12-09 18:15:22.829970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.053 qpair failed and we were unable to recover it. 
00:26:00.053 [2024-12-09 18:15:22.839783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.053 [2024-12-09 18:15:22.839883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.053 [2024-12-09 18:15:22.839910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.053 [2024-12-09 18:15:22.839924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.053 [2024-12-09 18:15:22.839936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.053 [2024-12-09 18:15:22.839964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.053 qpair failed and we were unable to recover it. 
00:26:00.053 [2024-12-09 18:15:22.849891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.053 [2024-12-09 18:15:22.849990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.053 [2024-12-09 18:15:22.850016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.053 [2024-12-09 18:15:22.850031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.850042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.850070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.859988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.860088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.860114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.860128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.860139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.860167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.869882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.869971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.869996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.870010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.870022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.870051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.879915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.879993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.880018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.880032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.880044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.880072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.889995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.890099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.890124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.890144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.890156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.890184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.899993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.900077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.900102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.900116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.900127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.900155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.910008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.910120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.910145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.910159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.910170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.910198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.920117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.920206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.920232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.920246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.920258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.920286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.930053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.054 [2024-12-09 18:15:22.930143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.054 [2024-12-09 18:15:22.930169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.054 [2024-12-09 18:15:22.930182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.054 [2024-12-09 18:15:22.930195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.054 [2024-12-09 18:15:22.930229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.054 qpair failed and we were unable to recover it. 
00:26:00.054 [2024-12-09 18:15:22.940108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.054 [2024-12-09 18:15:22.940192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.054 [2024-12-09 18:15:22.940217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.054 [2024-12-09 18:15:22.940231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.054 [2024-12-09 18:15:22.940242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.054 [2024-12-09 18:15:22.940270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.054 qpair failed and we were unable to recover it.
00:26:00.054 [2024-12-09 18:15:22.950106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.054 [2024-12-09 18:15:22.950201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.054 [2024-12-09 18:15:22.950226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.054 [2024-12-09 18:15:22.950240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.054 [2024-12-09 18:15:22.950252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.054 [2024-12-09 18:15:22.950279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.054 qpair failed and we were unable to recover it.
00:26:00.054 [2024-12-09 18:15:22.960158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.054 [2024-12-09 18:15:22.960248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.054 [2024-12-09 18:15:22.960274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.054 [2024-12-09 18:15:22.960288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.054 [2024-12-09 18:15:22.960300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.054 [2024-12-09 18:15:22.960327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.054 qpair failed and we were unable to recover it.
00:26:00.054 [2024-12-09 18:15:22.970160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.054 [2024-12-09 18:15:22.970251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.054 [2024-12-09 18:15:22.970276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.054 [2024-12-09 18:15:22.970290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.054 [2024-12-09 18:15:22.970301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.054 [2024-12-09 18:15:22.970329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.054 qpair failed and we were unable to recover it.
00:26:00.054 [2024-12-09 18:15:22.980209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.054 [2024-12-09 18:15:22.980298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.054 [2024-12-09 18:15:22.980326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.054 [2024-12-09 18:15:22.980341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.054 [2024-12-09 18:15:22.980353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.054 [2024-12-09 18:15:22.980382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.054 qpair failed and we were unable to recover it.
00:26:00.054 [2024-12-09 18:15:22.990266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:22.990360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:22.990385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:22.990399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:22.990411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:22.990440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.000267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.000351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.000377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.000391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.000403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.000431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.010293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.010426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.010452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.010466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.010478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.010505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.020322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.020406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.020431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.020451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.020464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.020492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.030339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.030468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.030494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.030508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.030520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.030556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.040394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.040478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.040504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.040517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.040529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.040564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.050440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.050533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.050568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.050582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.050594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.050623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.060418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.060503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.060527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.060541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.060561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.060596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.070452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.070531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.070563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.070578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.070590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.070618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.080505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.080623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.080649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.080663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.080675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.080704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.055 [2024-12-09 18:15:23.090560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.055 [2024-12-09 18:15:23.090655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.055 [2024-12-09 18:15:23.090681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.055 [2024-12-09 18:15:23.090695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.055 [2024-12-09 18:15:23.090707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.055 [2024-12-09 18:15:23.090735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.055 qpair failed and we were unable to recover it.
00:26:00.315 [2024-12-09 18:15:23.100572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.315 [2024-12-09 18:15:23.100664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.315 [2024-12-09 18:15:23.100694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.315 [2024-12-09 18:15:23.100710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.315 [2024-12-09 18:15:23.100723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.315 [2024-12-09 18:15:23.100752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.315 qpair failed and we were unable to recover it.
00:26:00.315 [2024-12-09 18:15:23.110645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.110768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.110794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.110808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.110820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.110848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.120614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.120701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.120727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.120741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.120753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.120781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.130655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.130740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.130765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.130779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.130791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.130819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.140687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.140776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.140801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.140815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.140826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.140854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.150698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.150781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.150806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.150826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.150838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.150866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.160818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.160897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.160924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.160939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.160950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.160978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.170792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.170881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.170906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.170920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.170932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.170960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.180779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.180864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.180889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.180902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.180914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.180942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.190797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.190877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.190902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.190916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.190927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.190960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.200841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.200920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.200946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.200959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.200972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.200999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.210890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.211000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.211025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.211039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.211051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.211079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.220960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.221089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.221122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.221145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.221166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.221208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.230923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.231006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.231034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.316 [2024-12-09 18:15:23.231049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.316 [2024-12-09 18:15:23.231061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.316 [2024-12-09 18:15:23.231090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.316 qpair failed and we were unable to recover it.
00:26:00.316 [2024-12-09 18:15:23.240958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.316 [2024-12-09 18:15:23.241053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.316 [2024-12-09 18:15:23.241079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.241093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.241105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.241133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.250983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.251072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.251097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.251111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.251123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.251151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.260999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.261098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.261123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.261137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.261148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.261175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.271065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.271184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.271210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.271224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.271236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.271264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.281071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.281157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.281186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.281208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.281222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.281251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.291097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.317 [2024-12-09 18:15:23.291186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.317 [2024-12-09 18:15:23.291212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.317 [2024-12-09 18:15:23.291226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.317 [2024-12-09 18:15:23.291238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.317 [2024-12-09 18:15:23.291267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.317 qpair failed and we were unable to recover it. 
00:26:00.317 [2024-12-09 18:15:23.301118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.301220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.301245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.301259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.301271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.301299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.311150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.311238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.311263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.311277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.311289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.311317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.321255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.321336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.321361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.321375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.321387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.321420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.331217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.331309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.331334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.331348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.331360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.331387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.341307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.341393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.341419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.341433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.341445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.341473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.317 [2024-12-09 18:15:23.351257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.317 [2024-12-09 18:15:23.351344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.317 [2024-12-09 18:15:23.351370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.317 [2024-12-09 18:15:23.351392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.317 [2024-12-09 18:15:23.351408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.317 [2024-12-09 18:15:23.351436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.317 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.361377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.361463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.361489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.361503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.361516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.361550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.371368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.371462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.371487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.371501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.371513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.371541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.381369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.381453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.381478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.381492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.381504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.381532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.391354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.391439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.391465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.391479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.391491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.391518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.401391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.401480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.401506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.401520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.401532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.401567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.411519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.411619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.411648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.411674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.411687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.411717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.421455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.421555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.421583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.421598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.421609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.421638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.431471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.431564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.431590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.431604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.431617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.431646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.441536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.441655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.441680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.441694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.577 [2024-12-09 18:15:23.441706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.577 [2024-12-09 18:15:23.441735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.577 qpair failed and we were unable to recover it.
00:26:00.577 [2024-12-09 18:15:23.451541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.577 [2024-12-09 18:15:23.451642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.577 [2024-12-09 18:15:23.451667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.577 [2024-12-09 18:15:23.451680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.451692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.451726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.461611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.461707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.461732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.461746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.461758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.461786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.471579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.471693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.471719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.471733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.471745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.471773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.481662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.481765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.481794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.481809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.481821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.481849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.491677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.491772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.491798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.491812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.491824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.491852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.501688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.501777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.501803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.501817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.501829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.501857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.511692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.511775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.511800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.511814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.511825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.511854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.521725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.521808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.521834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.521848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.521859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.521888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.531783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.531874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.531899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.531913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.531925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.531953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.541800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.541906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.541935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.541959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.541972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.542000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.551831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.551963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.551989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.552003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.552015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.552043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.561832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.561926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.561951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.561965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.561977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.562004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.571907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.571996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.572022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.572035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.572047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.572075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.581910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.578 [2024-12-09 18:15:23.581991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.578 [2024-12-09 18:15:23.582016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.578 [2024-12-09 18:15:23.582029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.578 [2024-12-09 18:15:23.582041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.578 [2024-12-09 18:15:23.582075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.578 qpair failed and we were unable to recover it.
00:26:00.578 [2024-12-09 18:15:23.591999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.579 [2024-12-09 18:15:23.592089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.579 [2024-12-09 18:15:23.592114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.579 [2024-12-09 18:15:23.592128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.579 [2024-12-09 18:15:23.592139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:00.579 [2024-12-09 18:15:23.592168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:00.579 qpair failed and we were unable to recover it.
00:26:00.579 [2024-12-09 18:15:23.602008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.579 [2024-12-09 18:15:23.602120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.579 [2024-12-09 18:15:23.602145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.579 [2024-12-09 18:15:23.602159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.579 [2024-12-09 18:15:23.602171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.579 [2024-12-09 18:15:23.602199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.579 qpair failed and we were unable to recover it. 
00:26:00.579 [2024-12-09 18:15:23.612005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.579 [2024-12-09 18:15:23.612098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.579 [2024-12-09 18:15:23.612123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.579 [2024-12-09 18:15:23.612138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.579 [2024-12-09 18:15:23.612149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.579 [2024-12-09 18:15:23.612178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.579 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.622009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.622100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.622126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.622140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.622152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.622181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.632063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.632182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.632208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.632223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.632235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.632263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.642052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.642175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.642200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.642214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.642226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.642254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.652137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.652253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.652278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.652292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.652303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.652331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.662134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.662239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.662264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.662278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.662289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.662317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.672150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.672231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.672257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.672276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.672288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.672316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.682157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.838 [2024-12-09 18:15:23.682245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.838 [2024-12-09 18:15:23.682270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.838 [2024-12-09 18:15:23.682284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.838 [2024-12-09 18:15:23.682296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.838 [2024-12-09 18:15:23.682323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.838 qpair failed and we were unable to recover it. 
00:26:00.838 [2024-12-09 18:15:23.692192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.692281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.692307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.692321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.692333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.692360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.702229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.702309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.702334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.702348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.702359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.702388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.712247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.712343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.712368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.712381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.712393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.712426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.722266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.722349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.722374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.722388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.722400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.722427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.732406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.732510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.732536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.732559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.732572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.732602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.742393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.742503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.742530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.742551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.742565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.742593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.752469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.752608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.752638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.752654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.752666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.752694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.762447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.762537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.762569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.762584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.762595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.762623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.772573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.772691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.772719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.772736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.772747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.772776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.782468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.782560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.782586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.782600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.782612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.782641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.792577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.792673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.792699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.792712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.792724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.792752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.802518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.802611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.802637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.802656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.802669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.802697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.812602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.812693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.812718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.812732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.839 [2024-12-09 18:15:23.812744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.839 [2024-12-09 18:15:23.812772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.839 qpair failed and we were unable to recover it. 
00:26:00.839 [2024-12-09 18:15:23.822581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.839 [2024-12-09 18:15:23.822696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.839 [2024-12-09 18:15:23.822721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.839 [2024-12-09 18:15:23.822735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.840 [2024-12-09 18:15:23.822747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.840 [2024-12-09 18:15:23.822775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.840 qpair failed and we were unable to recover it. 
00:26:00.840 [2024-12-09 18:15:23.832620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.840 [2024-12-09 18:15:23.832728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.840 [2024-12-09 18:15:23.832754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.840 [2024-12-09 18:15:23.832767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.840 [2024-12-09 18:15:23.832779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.840 [2024-12-09 18:15:23.832807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.840 qpair failed and we were unable to recover it. 
00:26:00.840 [2024-12-09 18:15:23.842776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.840 [2024-12-09 18:15:23.842897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.840 [2024-12-09 18:15:23.842922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.840 [2024-12-09 18:15:23.842937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.840 [2024-12-09 18:15:23.842948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.840 [2024-12-09 18:15:23.842981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.840 qpair failed and we were unable to recover it. 
00:26:00.840 [2024-12-09 18:15:23.852736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.840 [2024-12-09 18:15:23.852825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.840 [2024-12-09 18:15:23.852850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.840 [2024-12-09 18:15:23.852864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.840 [2024-12-09 18:15:23.852876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.840 [2024-12-09 18:15:23.852904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.840 qpair failed and we were unable to recover it. 
00:26:00.840 [2024-12-09 18:15:23.862793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.840 [2024-12-09 18:15:23.862907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.840 [2024-12-09 18:15:23.862932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.840 [2024-12-09 18:15:23.862946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.840 [2024-12-09 18:15:23.862958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.840 [2024-12-09 18:15:23.862985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.840 qpair failed and we were unable to recover it. 
00:26:00.840 [2024-12-09 18:15:23.872824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.840 [2024-12-09 18:15:23.872915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.840 [2024-12-09 18:15:23.872941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.840 [2024-12-09 18:15:23.872960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.840 [2024-12-09 18:15:23.872972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:00.840 [2024-12-09 18:15:23.873005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.840 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.882734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.882865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.882892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.882906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.882917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.882946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.892779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.892876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.892903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.892917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.892929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.892956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.902864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.902966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.902992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.903006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.903018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.903046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.912864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.912957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.912983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.912999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.913014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.913054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.922897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.923023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.923049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.923063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.923076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.923104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.932911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.933038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.933064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.933083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.933096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.933125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.943016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.943105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.943131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.943145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.943157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.943185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.952965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.953050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.953076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.953090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.953101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.953129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.963078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.963173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.963198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.963212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.963224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.963251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.973039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.973169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.973194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.973208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.973220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.973253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.983050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.983136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.983164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.983179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.983191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.983220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:23.993091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:23.993216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:23.993246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:23.993262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:23.993274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:23.993303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:24.003204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.102 [2024-12-09 18:15:24.003333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.102 [2024-12-09 18:15:24.003360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.102 [2024-12-09 18:15:24.003374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.102 [2024-12-09 18:15:24.003385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.102 [2024-12-09 18:15:24.003414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.102 qpair failed and we were unable to recover it. 
00:26:01.102 [2024-12-09 18:15:24.013117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.013209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.013234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.013248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.013259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.013287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.023210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.023336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.023372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.023386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.023398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.023426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.033217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.033300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.033326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.033340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.033352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.033380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.043196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.043279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.043304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.043319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.043331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.043359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.053297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.053415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.053440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.053455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.053467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.053495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.063287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.063368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.063399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.063414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.063426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.063453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.073293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.073379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.073404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.073418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.073430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.073457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.083322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.083412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.083437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.083451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.083463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.083491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.093414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.093511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.093536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.093557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.093570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.093598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.103369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.103455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.103480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.103495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.103506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.103539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.113405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.113512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.113540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.113567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.113579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.113609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.123477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.123573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.123600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.123614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.123626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.123655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.103 [2024-12-09 18:15:24.133479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.103 [2024-12-09 18:15:24.133579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.103 [2024-12-09 18:15:24.133608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.103 [2024-12-09 18:15:24.133624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.103 [2024-12-09 18:15:24.133636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.103 [2024-12-09 18:15:24.133665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.103 qpair failed and we were unable to recover it. 
00:26:01.365 [2024-12-09 18:15:24.143507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.365 [2024-12-09 18:15:24.143604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.365 [2024-12-09 18:15:24.143631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.365 [2024-12-09 18:15:24.143646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.365 [2024-12-09 18:15:24.143657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:01.365 [2024-12-09 18:15:24.143686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.365 qpair failed and we were unable to recover it. 
00:26:01.365 [2024-12-09 18:15:24.153555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.365 [2024-12-09 18:15:24.153662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.365 [2024-12-09 18:15:24.153694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.365 [2024-12-09 18:15:24.153710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.365 [2024-12-09 18:15:24.153722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.365 [2024-12-09 18:15:24.153753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.365 qpair failed and we were unable to recover it. 
00:26:01.365 [2024-12-09 18:15:24.163538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.365 [2024-12-09 18:15:24.163628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.365 [2024-12-09 18:15:24.163654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.365 [2024-12-09 18:15:24.163668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.365 [2024-12-09 18:15:24.163679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.365 [2024-12-09 18:15:24.163709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.365 qpair failed and we were unable to recover it. 
00:26:01.365 [2024-12-09 18:15:24.173587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.365 [2024-12-09 18:15:24.173698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.365 [2024-12-09 18:15:24.173724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.365 [2024-12-09 18:15:24.173739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.365 [2024-12-09 18:15:24.173750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.365 [2024-12-09 18:15:24.173780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.365 qpair failed and we were unable to recover it. 
00:26:01.365 [2024-12-09 18:15:24.183590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.365 [2024-12-09 18:15:24.183710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.365 [2024-12-09 18:15:24.183736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.365 [2024-12-09 18:15:24.183754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.365 [2024-12-09 18:15:24.183767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.365 [2024-12-09 18:15:24.183797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.365 qpair failed and we were unable to recover it. 
00:26:01.365 [2024-12-09 18:15:24.193651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.365 [2024-12-09 18:15:24.193736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.365 [2024-12-09 18:15:24.193768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.365 [2024-12-09 18:15:24.193784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.365 [2024-12-09 18:15:24.193796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.365 [2024-12-09 18:15:24.193826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.203685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.203797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.203824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.203838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.203850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.203880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.213698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.213790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.213817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.213831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.213843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.213873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.223748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.223883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.223910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.223924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.223936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.223965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.233739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.233829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.233854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.233869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.233886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.233917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.243778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.243863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.243887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.243902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.243914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.243943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.253908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.254040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.254066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.254080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.254092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.254121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.263861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.263958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.263983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.263997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.264009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.264038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.273856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.273969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.273995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.274009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.274021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.274051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.283935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.284050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.284076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.284090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.284102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.284131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.293921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.294011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.294035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.294049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.294061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.294090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.303949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.304038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.304065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.304079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.304091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.304133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.313977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.314085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.314111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.314126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.314138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.314168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.324001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.324088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.366 [2024-12-09 18:15:24.324120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.366 [2024-12-09 18:15:24.324135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.366 [2024-12-09 18:15:24.324147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.366 [2024-12-09 18:15:24.324178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.366 qpair failed and we were unable to recover it. 
00:26:01.366 [2024-12-09 18:15:24.334023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.366 [2024-12-09 18:15:24.334115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.334141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.334155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.334167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.334197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.367 [2024-12-09 18:15:24.344063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.367 [2024-12-09 18:15:24.344159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.344185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.344199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.344210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.344240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.367 [2024-12-09 18:15:24.354058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.367 [2024-12-09 18:15:24.354140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.354165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.354178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.354190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.354219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.367 [2024-12-09 18:15:24.364112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.367 [2024-12-09 18:15:24.364191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.364215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.364229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.364247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.364278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.367 [2024-12-09 18:15:24.374188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.367 [2024-12-09 18:15:24.374296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.374321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.374336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.374347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.374377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.367 [2024-12-09 18:15:24.384142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.367 [2024-12-09 18:15:24.384249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.384276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.384290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.384301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.384343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.367 [2024-12-09 18:15:24.394241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.367 [2024-12-09 18:15:24.394331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.367 [2024-12-09 18:15:24.394356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.367 [2024-12-09 18:15:24.394370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.367 [2024-12-09 18:15:24.394382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.367 [2024-12-09 18:15:24.394411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.367 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.404185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.404270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.404298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.404313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.404324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.404354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.414246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.414342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.414368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.414382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.414394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.414423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.424268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.424363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.424390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.424404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.424416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.424445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.434368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.434473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.434500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.434514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.434525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.434561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.444313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.444436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.444462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.444476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.444488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.444518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.454342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.454431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.454460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.454474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.454486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.454515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.464369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.464456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.464481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.464495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.464506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.464536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.627 [2024-12-09 18:15:24.474399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.627 [2024-12-09 18:15:24.474533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.627 [2024-12-09 18:15:24.474567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.627 [2024-12-09 18:15:24.474582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.627 [2024-12-09 18:15:24.474594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.627 [2024-12-09 18:15:24.474623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.627 qpair failed and we were unable to recover it. 
00:26:01.628 [2024-12-09 18:15:24.484413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.628 [2024-12-09 18:15:24.484506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.628 [2024-12-09 18:15:24.484531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.628 [2024-12-09 18:15:24.484551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.628 [2024-12-09 18:15:24.484565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.628 [2024-12-09 18:15:24.484594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.628 qpair failed and we were unable to recover it. 
00:26:01.628 [2024-12-09 18:15:24.494491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.494598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.494625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.494645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.494657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.494700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.504466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.504564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.504590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.504604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.504616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.504646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.514515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.514607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.514634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.514648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.514660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.514690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.524529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.524653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.524681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.524695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.524707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.524737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.534585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.534700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.534726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.534741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.534752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.534788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.544580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.544664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.544689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.544703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.544716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.544746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.554663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.554791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.554818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.554832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.554844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.554874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.564733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.564844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.564870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.564885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.564897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.564926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.574708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.574803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.574833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.574851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.574863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.574894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.584698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.584817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.584844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.584858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.584871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.584900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.594773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.594865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.594889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.594902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.594914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.594944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.604793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.604876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.604901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.628 [2024-12-09 18:15:24.604915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.628 [2024-12-09 18:15:24.604927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.628 [2024-12-09 18:15:24.604956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.628 qpair failed and we were unable to recover it.
00:26:01.628 [2024-12-09 18:15:24.614827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.628 [2024-12-09 18:15:24.614914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.628 [2024-12-09 18:15:24.614940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.629 [2024-12-09 18:15:24.614954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.629 [2024-12-09 18:15:24.614966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.629 [2024-12-09 18:15:24.614996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.629 qpair failed and we were unable to recover it.
00:26:01.629 [2024-12-09 18:15:24.624860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.629 [2024-12-09 18:15:24.624976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.629 [2024-12-09 18:15:24.625002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.629 [2024-12-09 18:15:24.625022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.629 [2024-12-09 18:15:24.625035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.629 [2024-12-09 18:15:24.625064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.629 qpair failed and we were unable to recover it.
00:26:01.629 [2024-12-09 18:15:24.634907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.629 [2024-12-09 18:15:24.635027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.629 [2024-12-09 18:15:24.635053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.629 [2024-12-09 18:15:24.635067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.629 [2024-12-09 18:15:24.635079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.629 [2024-12-09 18:15:24.635109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.629 qpair failed and we were unable to recover it.
00:26:01.629 [2024-12-09 18:15:24.644882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.629 [2024-12-09 18:15:24.644963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.629 [2024-12-09 18:15:24.644988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.629 [2024-12-09 18:15:24.645003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.629 [2024-12-09 18:15:24.645015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.629 [2024-12-09 18:15:24.645045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.629 qpair failed and we were unable to recover it.
00:26:01.629 [2024-12-09 18:15:24.654938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.629 [2024-12-09 18:15:24.655025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.629 [2024-12-09 18:15:24.655049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.629 [2024-12-09 18:15:24.655064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.629 [2024-12-09 18:15:24.655076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.629 [2024-12-09 18:15:24.655105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.629 qpair failed and we were unable to recover it.
00:26:01.629 [2024-12-09 18:15:24.664923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.629 [2024-12-09 18:15:24.665006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.629 [2024-12-09 18:15:24.665031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.629 [2024-12-09 18:15:24.665044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.629 [2024-12-09 18:15:24.665056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.629 [2024-12-09 18:15:24.665092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.629 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.675012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.675110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.675135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.675150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.675161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.675191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.684987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.685070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.685096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.685110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.685122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.685152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.695060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.695153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.695183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.695198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.695210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.695240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.705143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.705235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.705261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.705275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.705287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.705317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.715100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.715183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.715211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.715226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.715238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.715268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.725094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.725178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.725202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.725216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.725228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.725257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.735149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.735240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.735265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.735279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.735291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.735321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.745161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.745244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.745270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.745284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.745296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.745326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.755316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.755448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.755484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.755500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.755511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.755541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.765243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.891 [2024-12-09 18:15:24.765352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.891 [2024-12-09 18:15:24.765377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.891 [2024-12-09 18:15:24.765391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.891 [2024-12-09 18:15:24.765403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.891 [2024-12-09 18:15:24.765432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.891 qpair failed and we were unable to recover it.
00:26:01.891 [2024-12-09 18:15:24.775239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.775333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.775357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.775371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.775383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.775411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.785344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.785428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.785452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.785466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.785478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.785520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.795321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.795440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.795466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.795480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.795498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.795528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.805372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.805460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.805486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.805505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.805517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.805555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.815392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.815515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.815540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.815563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.815576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.815606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.825396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.825481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.825507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.825521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.825533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.825574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.835435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.892 [2024-12-09 18:15:24.835513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.892 [2024-12-09 18:15:24.835538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.892 [2024-12-09 18:15:24.835560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.892 [2024-12-09 18:15:24.835573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:01.892 [2024-12-09 18:15:24.835616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.892 qpair failed and we were unable to recover it.
00:26:01.892 [2024-12-09 18:15:24.845456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.845540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.845576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.845592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.892 [2024-12-09 18:15:24.845603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.892 [2024-12-09 18:15:24.845634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.892 qpair failed and we were unable to recover it. 
00:26:01.892 [2024-12-09 18:15:24.855490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.855600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.855630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.855647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.892 [2024-12-09 18:15:24.855659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.892 [2024-12-09 18:15:24.855690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.892 qpair failed and we were unable to recover it. 
00:26:01.892 [2024-12-09 18:15:24.865490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.865584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.865609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.865623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.892 [2024-12-09 18:15:24.865634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.892 [2024-12-09 18:15:24.865668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.892 qpair failed and we were unable to recover it. 
00:26:01.892 [2024-12-09 18:15:24.875570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.875680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.875706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.875720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.892 [2024-12-09 18:15:24.875732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.892 [2024-12-09 18:15:24.875763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.892 qpair failed and we were unable to recover it. 
00:26:01.892 [2024-12-09 18:15:24.885569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.885673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.885708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.885725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.892 [2024-12-09 18:15:24.885737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.892 [2024-12-09 18:15:24.885767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.892 qpair failed and we were unable to recover it. 
00:26:01.892 [2024-12-09 18:15:24.895646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.895745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.895771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.895785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.892 [2024-12-09 18:15:24.895797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.892 [2024-12-09 18:15:24.895827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.892 qpair failed and we were unable to recover it. 
00:26:01.892 [2024-12-09 18:15:24.905626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.892 [2024-12-09 18:15:24.905708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.892 [2024-12-09 18:15:24.905733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.892 [2024-12-09 18:15:24.905748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.893 [2024-12-09 18:15:24.905760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.893 [2024-12-09 18:15:24.905806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.893 qpair failed and we were unable to recover it. 
00:26:01.893 [2024-12-09 18:15:24.915641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.893 [2024-12-09 18:15:24.915730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.893 [2024-12-09 18:15:24.915756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.893 [2024-12-09 18:15:24.915770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.893 [2024-12-09 18:15:24.915782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.893 [2024-12-09 18:15:24.915812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.893 qpair failed and we were unable to recover it. 
00:26:01.893 [2024-12-09 18:15:24.925770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.893 [2024-12-09 18:15:24.925855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.893 [2024-12-09 18:15:24.925881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.893 [2024-12-09 18:15:24.925896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.893 [2024-12-09 18:15:24.925913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:01.893 [2024-12-09 18:15:24.925944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.893 qpair failed and we were unable to recover it. 
00:26:02.152 [2024-12-09 18:15:24.935714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.152 [2024-12-09 18:15:24.935808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.152 [2024-12-09 18:15:24.935834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.152 [2024-12-09 18:15:24.935848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.152 [2024-12-09 18:15:24.935860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.152 [2024-12-09 18:15:24.935889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.152 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:24.945816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:24.945900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:24.945927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:24.945942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:24.945954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:24.945984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:24.955741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:24.955826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:24.955852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:24.955866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:24.955878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:24.955907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:24.965767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:24.965862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:24.965887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:24.965901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:24.965913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:24.965943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:24.975807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:24.975897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:24.975921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:24.975934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:24.975946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:24.975976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:24.985896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:24.986000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:24.986029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:24.986043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:24.986055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:24.986085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:24.995870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:24.995991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:24.996017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:24.996030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:24.996042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:24.996071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.005896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.005980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.006004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.006018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.006030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.006073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.015910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.015998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.016029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.016045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.016056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.016085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.026068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.026158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.026184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.026198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.026210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.026239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.035996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.036081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.036122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.036136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.036148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.036190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.045988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.046107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.046133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.046147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.046161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.046204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.056062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.056162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.056187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.056207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.056219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.056249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.066055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.153 [2024-12-09 18:15:25.066139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.153 [2024-12-09 18:15:25.066165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.153 [2024-12-09 18:15:25.066180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.153 [2024-12-09 18:15:25.066192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.153 [2024-12-09 18:15:25.066234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.153 qpair failed and we were unable to recover it. 
00:26:02.153 [2024-12-09 18:15:25.076073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.154 [2024-12-09 18:15:25.076158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.154 [2024-12-09 18:15:25.076182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.154 [2024-12-09 18:15:25.076196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.154 [2024-12-09 18:15:25.076208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.154 [2024-12-09 18:15:25.076238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.154 qpair failed and we were unable to recover it. 
00:26:02.154 [2024-12-09 18:15:25.086083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.154 [2024-12-09 18:15:25.086205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.154 [2024-12-09 18:15:25.086230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.154 [2024-12-09 18:15:25.086244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.154 [2024-12-09 18:15:25.086256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.154 [2024-12-09 18:15:25.086292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.154 qpair failed and we were unable to recover it. 
00:26:02.154 [2024-12-09 18:15:25.096195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.154 [2024-12-09 18:15:25.096307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.154 [2024-12-09 18:15:25.096333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.154 [2024-12-09 18:15:25.096347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.154 [2024-12-09 18:15:25.096359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.154 [2024-12-09 18:15:25.096394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.154 qpair failed and we were unable to recover it. 
00:26:02.154 [2024-12-09 18:15:25.106176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.154 [2024-12-09 18:15:25.106304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.154 [2024-12-09 18:15:25.106329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.154 [2024-12-09 18:15:25.106343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.154 [2024-12-09 18:15:25.106354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90 00:26:02.154 [2024-12-09 18:15:25.106384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.154 qpair failed and we were unable to recover it. 
00:26:02.154 [2024-12-09 18:15:25.116182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.116269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.116295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.116309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.116321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.116350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.126215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.126300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.126327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.126342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.126355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.126385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.136266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.136358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.136383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.136397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.136409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.136439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.146258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.146353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.146382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.146397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.146413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.146444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.156321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.156409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.156434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.156447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.156460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.156489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.166305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.166393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.166422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.166436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.166448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.166478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.176356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.176440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.176464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.176477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.176489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.176518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.154 [2024-12-09 18:15:25.186363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.154 [2024-12-09 18:15:25.186444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.154 [2024-12-09 18:15:25.186469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.154 [2024-12-09 18:15:25.186489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.154 [2024-12-09 18:15:25.186502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.154 [2024-12-09 18:15:25.186535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.154 qpair failed and we were unable to recover it.
00:26:02.412 [2024-12-09 18:15:25.196405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.412 [2024-12-09 18:15:25.196488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.412 [2024-12-09 18:15:25.196515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.412 [2024-12-09 18:15:25.196529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.412 [2024-12-09 18:15:25.196541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.412 [2024-12-09 18:15:25.196579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.412 qpair failed and we were unable to recover it.
00:26:02.412 [2024-12-09 18:15:25.206428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.412 [2024-12-09 18:15:25.206539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.412 [2024-12-09 18:15:25.206573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.412 [2024-12-09 18:15:25.206587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.412 [2024-12-09 18:15:25.206599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.412 [2024-12-09 18:15:25.206629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.412 qpair failed and we were unable to recover it.
00:26:02.412 [2024-12-09 18:15:25.216469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.412 [2024-12-09 18:15:25.216566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.412 [2024-12-09 18:15:25.216591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.412 [2024-12-09 18:15:25.216604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.412 [2024-12-09 18:15:25.216616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.412 [2024-12-09 18:15:25.216646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.412 qpair failed and we were unable to recover it.
00:26:02.412 [2024-12-09 18:15:25.226558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.412 [2024-12-09 18:15:25.226656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.412 [2024-12-09 18:15:25.226682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.412 [2024-12-09 18:15:25.226696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.412 [2024-12-09 18:15:25.226708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.412 [2024-12-09 18:15:25.226744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.412 qpair failed and we were unable to recover it.
00:26:02.412 [2024-12-09 18:15:25.236502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.412 [2024-12-09 18:15:25.236611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.236638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.236653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.236665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.413 [2024-12-09 18:15:25.236695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.246559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.246647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.246672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.246687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.246699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.413 [2024-12-09 18:15:25.246729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.256616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.256711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.256736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.256750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.256762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efef8000b90
00:26:02.413 [2024-12-09 18:15:25.256792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.266619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.266705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.266737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.266753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.266765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff04000b90
00:26:02.413 [2024-12-09 18:15:25.266797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.276658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.276789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.276821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.276837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.276849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.276878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.286701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.286821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.286848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.286862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.286875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.286903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.296759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.296858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.296885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.296899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.296911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.296939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.306718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.306813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.306838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.306852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.306864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.306893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.316758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.316847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.316878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.316893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.316904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.316932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.326906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.327041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.327067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.327080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.327092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.327120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.336836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.336930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.336959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.336975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.336987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.337017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.346843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.346933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.346959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.346974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.346986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.347014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.356890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.356970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.356996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.357010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.357022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.413 [2024-12-09 18:15:25.357059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.413 qpair failed and we were unable to recover it.
00:26:02.413 [2024-12-09 18:15:25.366905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.413 [2024-12-09 18:15:25.366991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.413 [2024-12-09 18:15:25.367016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.413 [2024-12-09 18:15:25.367030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.413 [2024-12-09 18:15:25.367042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.367069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.376917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.377006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.377031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.377045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.377057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.377084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.386957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.387043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.387068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.387082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.387094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.387122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.396987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.397074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.397100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.397113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.397125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.397153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.406998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.407096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.407121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.407135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.407147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.407175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.417037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.417129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.417155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.417169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.417181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.417208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.427049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.427127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.427153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.427167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.427179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.427207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.437099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.437180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.437205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.437220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.437231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.437259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.414 [2024-12-09 18:15:25.447120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.414 [2024-12-09 18:15:25.447208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.414 [2024-12-09 18:15:25.447239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.414 [2024-12-09 18:15:25.447254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.414 [2024-12-09 18:15:25.447266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.414 [2024-12-09 18:15:25.447294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.414 qpair failed and we were unable to recover it.
00:26:02.672 [2024-12-09 18:15:25.457171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.672 [2024-12-09 18:15:25.457281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.672 [2024-12-09 18:15:25.457307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.672 [2024-12-09 18:15:25.457322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.672 [2024-12-09 18:15:25.457334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0
00:26:02.672 [2024-12-09 18:15:25.457363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:02.672 qpair failed and we were unable to recover it.
00:26:02.672 [2024-12-09 18:15:25.467186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.467271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.467297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.467311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.467323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.467351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.477220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.477311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.477338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.477353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.477364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.477392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.487281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.487368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.487396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.487411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.487428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.487458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.497272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.497366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.497393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.497407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.497419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.497447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.507296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.507430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.507456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.507470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.507482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.507510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.517406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.517525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.517556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.517572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.517584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.517613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.527372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.527456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.527481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.527495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.672 [2024-12-09 18:15:25.527507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.672 [2024-12-09 18:15:25.527534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.672 qpair failed and we were unable to recover it. 
00:26:02.672 [2024-12-09 18:15:25.537383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.672 [2024-12-09 18:15:25.537510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.672 [2024-12-09 18:15:25.537536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.672 [2024-12-09 18:15:25.537560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.537573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.537601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.547510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.547609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.547639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.547655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.547667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.547696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.557486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.557578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.557605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.557618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.557630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.557659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.567478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.567565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.567591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.567605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.567616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.567644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.577494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.577595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.577626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.577641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.577653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.577681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.587543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.587642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.587667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.587681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.587692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.587721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.597626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.597729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.597754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.597768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.597780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.597808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.607629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.607722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.607748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.607767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.607780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.607809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.617669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.617762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.617787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.617801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.617819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.617848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.627631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.627728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.627753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.627767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.627779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.627807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.637692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.637778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.637803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.637817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.637829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.637857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.647784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.647870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.647894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.647908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.647920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.647947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.657737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.657825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.657851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.657865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.657877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.657904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.667761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.673 [2024-12-09 18:15:25.667843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.673 [2024-12-09 18:15:25.667868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.673 [2024-12-09 18:15:25.667882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.673 [2024-12-09 18:15:25.667894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.673 [2024-12-09 18:15:25.667921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.673 qpair failed and we were unable to recover it. 
00:26:02.673 [2024-12-09 18:15:25.677788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.674 [2024-12-09 18:15:25.677867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.674 [2024-12-09 18:15:25.677892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.674 [2024-12-09 18:15:25.677905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.674 [2024-12-09 18:15:25.677917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.674 [2024-12-09 18:15:25.677946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.674 qpair failed and we were unable to recover it. 
00:26:02.674 [2024-12-09 18:15:25.687812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.674 [2024-12-09 18:15:25.687893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.674 [2024-12-09 18:15:25.687917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.674 [2024-12-09 18:15:25.687931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.674 [2024-12-09 18:15:25.687942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.674 [2024-12-09 18:15:25.687970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.674 qpair failed and we were unable to recover it. 
00:26:02.674 [2024-12-09 18:15:25.697975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.674 [2024-12-09 18:15:25.698070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.674 [2024-12-09 18:15:25.698095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.674 [2024-12-09 18:15:25.698109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.674 [2024-12-09 18:15:25.698120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.674 [2024-12-09 18:15:25.698148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.674 qpair failed and we were unable to recover it. 
00:26:02.674 [2024-12-09 18:15:25.707913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.674 [2024-12-09 18:15:25.708006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.674 [2024-12-09 18:15:25.708041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.674 [2024-12-09 18:15:25.708058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.674 [2024-12-09 18:15:25.708070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.674 [2024-12-09 18:15:25.708100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.674 qpair failed and we were unable to recover it. 
00:26:02.933 [2024-12-09 18:15:25.717986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.933 [2024-12-09 18:15:25.718074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.933 [2024-12-09 18:15:25.718101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.933 [2024-12-09 18:15:25.718116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.933 [2024-12-09 18:15:25.718128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.933 [2024-12-09 18:15:25.718157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.933 qpair failed and we were unable to recover it. 
00:26:02.933 [2024-12-09 18:15:25.727924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.933 [2024-12-09 18:15:25.728013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.933 [2024-12-09 18:15:25.728042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.933 [2024-12-09 18:15:25.728067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.933 [2024-12-09 18:15:25.728088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.933 [2024-12-09 18:15:25.728128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.933 qpair failed and we were unable to recover it. 
00:26:02.933 [2024-12-09 18:15:25.738002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.933 [2024-12-09 18:15:25.738114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.933 [2024-12-09 18:15:25.738140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.933 [2024-12-09 18:15:25.738154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.933 [2024-12-09 18:15:25.738166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:02.933 [2024-12-09 18:15:25.738195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.933 qpair failed and we were unable to recover it. 
00:26:02.933 [... same CONNECT failure sequence (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> CQ transport error -6 on qpair id 3) repeated for 34 further attempts, roughly every 10 ms, from 2024-12-09 18:15:25.747975 through 18:15:26.079065, each ending "qpair failed and we were unable to recover it." ...]
00:26:03.195 [2024-12-09 18:15:26.088897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.195 [2024-12-09 18:15:26.088993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.195 [2024-12-09 18:15:26.089018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.195 [2024-12-09 18:15:26.089033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.195 [2024-12-09 18:15:26.089045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.195 [2024-12-09 18:15:26.089072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.195 qpair failed and we were unable to recover it. 
00:26:03.195 [2024-12-09 18:15:26.098961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.195 [2024-12-09 18:15:26.099080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.195 [2024-12-09 18:15:26.099110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.195 [2024-12-09 18:15:26.099125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.195 [2024-12-09 18:15:26.099137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.195 [2024-12-09 18:15:26.099164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.195 qpair failed and we were unable to recover it. 
00:26:03.195 [2024-12-09 18:15:26.109061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.195 [2024-12-09 18:15:26.109150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.195 [2024-12-09 18:15:26.109175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.195 [2024-12-09 18:15:26.109189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.195 [2024-12-09 18:15:26.109201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.195 [2024-12-09 18:15:26.109229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.195 qpair failed and we were unable to recover it. 
00:26:03.195 [2024-12-09 18:15:26.119014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.195 [2024-12-09 18:15:26.119094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.195 [2024-12-09 18:15:26.119119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.195 [2024-12-09 18:15:26.119132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.195 [2024-12-09 18:15:26.119144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.195 [2024-12-09 18:15:26.119172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.195 qpair failed and we were unable to recover it. 
00:26:03.195 [2024-12-09 18:15:26.129021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.195 [2024-12-09 18:15:26.129103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.195 [2024-12-09 18:15:26.129129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.195 [2024-12-09 18:15:26.129143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.195 [2024-12-09 18:15:26.129154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.195 [2024-12-09 18:15:26.129182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.195 qpair failed and we were unable to recover it. 
00:26:03.195 [2024-12-09 18:15:26.139059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.139187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.139212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.139226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.139244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.139273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.149137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.149229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.149254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.149269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.149280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.149308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.159121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.159205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.159230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.159243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.159256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.159284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.169162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.169273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.169302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.169318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.169331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.169359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.179267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.179358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.179384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.179398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.179410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.179438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.189235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.189320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.189346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.189359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.189371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.189399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.199226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.199318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.199345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.199359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.199370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.199398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.209241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.209331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.209357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.209371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.209382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.209409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.219280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.219370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.219404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.219418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.219430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.219457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.196 [2024-12-09 18:15:26.229329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.196 [2024-12-09 18:15:26.229417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.196 [2024-12-09 18:15:26.229453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.196 [2024-12-09 18:15:26.229478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.196 [2024-12-09 18:15:26.229499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.196 [2024-12-09 18:15:26.229540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.196 qpair failed and we were unable to recover it. 
00:26:03.457 [2024-12-09 18:15:26.239337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.457 [2024-12-09 18:15:26.239429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.457 [2024-12-09 18:15:26.239457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.457 [2024-12-09 18:15:26.239471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.457 [2024-12-09 18:15:26.239483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.457 [2024-12-09 18:15:26.239512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.457 qpair failed and we were unable to recover it. 
00:26:03.457 [2024-12-09 18:15:26.249378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.457 [2024-12-09 18:15:26.249498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.457 [2024-12-09 18:15:26.249527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.457 [2024-12-09 18:15:26.249542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.457 [2024-12-09 18:15:26.249566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.457 [2024-12-09 18:15:26.249596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.457 qpair failed and we were unable to recover it. 
00:26:03.457 [2024-12-09 18:15:26.259389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.457 [2024-12-09 18:15:26.259488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.457 [2024-12-09 18:15:26.259513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.457 [2024-12-09 18:15:26.259526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.457 [2024-12-09 18:15:26.259538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.457 [2024-12-09 18:15:26.259576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.457 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.269433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.269542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.269582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.269598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.269616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.269646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.279452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.279534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.279568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.279583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.279595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.279623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.289500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.289596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.289621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.289636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.289647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.289676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.299519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.299630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.299656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.299670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.299681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.299709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.309651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.309738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.309764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.309777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.309789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.309817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.319584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.319702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.319727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.319741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.319752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.319781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.329619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.329712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.329741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.329757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.329769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.329798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.339617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.339707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.339732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.339746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.339758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.339786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.349663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.349791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.349816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.349830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.349841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.349870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.359707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.359794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.359826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.359841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.359853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.359881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.369727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.369854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.369880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.369894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.369906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.369933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.379776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.379867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.379891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.379905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.379918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.379945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.389756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.389840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.389866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.389880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.458 [2024-12-09 18:15:26.389891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.458 [2024-12-09 18:15:26.389919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.458 qpair failed and we were unable to recover it. 
00:26:03.458 [2024-12-09 18:15:26.399772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.458 [2024-12-09 18:15:26.399854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.458 [2024-12-09 18:15:26.399879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.458 [2024-12-09 18:15:26.399892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.399910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.399939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.409825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.409907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.409932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.409946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.409958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.409986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.419923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.420011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.420037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.420050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.420062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.420089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.429865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.429985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.430010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.430024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.430036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.430064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.439949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.440037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.440062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.440076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.440088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.440115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.450065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.450153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.450179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.450192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.450205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.450232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.460018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.460125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.460150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.460164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.460176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.460203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.470037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.470128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.470156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.470170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.470182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.470211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.480096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.480177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.480201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.480222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.480242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.480282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.459 [2024-12-09 18:15:26.490077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.459 [2024-12-09 18:15:26.490165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.459 [2024-12-09 18:15:26.490199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.459 [2024-12-09 18:15:26.490214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.459 [2024-12-09 18:15:26.490226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.459 [2024-12-09 18:15:26.490255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.459 qpair failed and we were unable to recover it. 
00:26:03.721 [2024-12-09 18:15:26.500074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.721 [2024-12-09 18:15:26.500169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.721 [2024-12-09 18:15:26.500196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.721 [2024-12-09 18:15:26.500210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.721 [2024-12-09 18:15:26.500222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.721 [2024-12-09 18:15:26.500250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.721 qpair failed and we were unable to recover it. 
00:26:03.721 [2024-12-09 18:15:26.510136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.721 [2024-12-09 18:15:26.510224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.721 [2024-12-09 18:15:26.510250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.721 [2024-12-09 18:15:26.510264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.721 [2024-12-09 18:15:26.510276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.721 [2024-12-09 18:15:26.510304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.520152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.520243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.520269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.520283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.520295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.520323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.530171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.530287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.530312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.530326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.530344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.530373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.540213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.540301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.540326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.540340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.540352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.540379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.550212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.550339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.550364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.550378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.550390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.550418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.560243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.560327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.560352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.560366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.560377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.560405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.570264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.570350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.570376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.570390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.570402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.570430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.580335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.580432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.580458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.580473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.580485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.580514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.590330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.590457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.590482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.590496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.590508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.590536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.600358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.600488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.600513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.600527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.600539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.600575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.610376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.610456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.610481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.610495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.610507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.610535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.620430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.620519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.620560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.620578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.620590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.620619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.630443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.630532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.630568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.630584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.630595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.630623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.640470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.722 [2024-12-09 18:15:26.640582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.722 [2024-12-09 18:15:26.640607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.722 [2024-12-09 18:15:26.640622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.722 [2024-12-09 18:15:26.640634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.722 [2024-12-09 18:15:26.640662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.722 qpair failed and we were unable to recover it. 
00:26:03.722 [2024-12-09 18:15:26.650503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.650596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.650622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.650636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.650648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.650675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.660563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.660651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.660676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.660689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.660710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.660739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.670569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.670651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.670676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.670690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.670702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.670730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.680580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.680661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.680687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.680700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.680712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.680741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.690629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.690736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.690761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.690776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.690787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.690815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.700786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.700905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.700930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.700944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.700956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.700983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.710700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.710786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.710811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.710825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.710837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.710864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.720697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.720780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.720806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.720820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.720832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.720860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.730844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.730944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.730969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.730982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.730994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.731022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.740791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.740887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.740913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.740927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.740939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.740967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.723 [2024-12-09 18:15:26.750792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.723 [2024-12-09 18:15:26.750879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.723 [2024-12-09 18:15:26.750909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.723 [2024-12-09 18:15:26.750924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.723 [2024-12-09 18:15:26.750936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.723 [2024-12-09 18:15:26.750964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.723 qpair failed and we were unable to recover it. 
00:26:03.983 [2024-12-09 18:15:26.760812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.983 [2024-12-09 18:15:26.760893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.983 [2024-12-09 18:15:26.760920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.983 [2024-12-09 18:15:26.760934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.983 [2024-12-09 18:15:26.760946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.983 [2024-12-09 18:15:26.760975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.983 qpair failed and we were unable to recover it. 
00:26:03.983 [2024-12-09 18:15:26.770876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.983 [2024-12-09 18:15:26.770959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.983 [2024-12-09 18:15:26.770987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.771001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.771013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.771041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.780899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.780991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.781017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.781031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.781043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.781070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.790960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.791085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.791110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.791124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.791142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.791171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.800925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.801010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.801036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.801050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.801062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.801089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.810989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.811099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.811125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.811139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.811151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.811178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.821008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.821099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.821124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.821138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.821150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.821177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.831054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.831172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.831197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.831211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.831223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.831251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.841095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.841206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.841232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.841246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.841258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.841286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.851171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.851258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.851283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.851298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.851310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.851338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.861118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.861211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.861236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.861250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.861263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.861290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.871140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.871226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.871251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.871265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.871277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.871305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.881180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.881305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.881335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.881349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.881361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.881389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.891218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.891302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.891328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.891342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.891354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.984 [2024-12-09 18:15:26.891382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.984 qpair failed and we were unable to recover it. 
00:26:03.984 [2024-12-09 18:15:26.901244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.984 [2024-12-09 18:15:26.901334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.984 [2024-12-09 18:15:26.901360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.984 [2024-12-09 18:15:26.901374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.984 [2024-12-09 18:15:26.901386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.901414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.911307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.911432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.911458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.911472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.911484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.911511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.921336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.921453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.921479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.921493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.921511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.921540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.931343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.931428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.931453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.931467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.931480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.931508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.941375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.941468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.941493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.941507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.941519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.941555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.951382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.951510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.951535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.951557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.951569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.951597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.961401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.961482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.961507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.961521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.961533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.961567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.971424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.971504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.971530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.971551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.971581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.971612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.981475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.981573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.981599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.981623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.981643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.981683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 
00:26:03.985 [2024-12-09 18:15:26.991599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.985 [2024-12-09 18:15:26.991696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.985 [2024-12-09 18:15:26.991724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.985 [2024-12-09 18:15:26.991739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.985 [2024-12-09 18:15:26.991751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20aefa0 00:26:03.985 [2024-12-09 18:15:26.991780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.985 qpair failed and we were unable to recover it. 00:26:03.985 [2024-12-09 18:15:26.991921] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:03.985 A controller has encountered a failure and is being reset. 00:26:03.985 Controller properly reset. 00:26:04.243 Initializing NVMe Controllers 00:26:04.243 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:04.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:04.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:04.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:04.243 Initialization complete. Launching workers. 
00:26:04.243 Starting thread on core 1 00:26:04.243 Starting thread on core 2 00:26:04.243 Starting thread on core 3 00:26:04.243 Starting thread on core 0 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:04.243 00:26:04.243 real 0m10.842s 00:26:04.243 user 0m18.783s 00:26:04.243 sys 0m5.055s 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.243 ************************************ 00:26:04.243 END TEST nvmf_target_disconnect_tc2 00:26:04.243 ************************************ 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.243 rmmod nvme_tcp 00:26:04.243 rmmod nvme_fabrics 00:26:04.243 rmmod nvme_keyring 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1583339 ']' 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1583339 00:26:04.243 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1583339 ']' 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1583339 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1583339 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1583339' 00:26:04.244 killing process with pid 1583339 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1583339 00:26:04.244 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1583339 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.502 18:15:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.403 18:15:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:06.403 00:26:06.403 real 0m15.882s 00:26:06.403 user 0m45.377s 00:26:06.403 sys 0m7.147s 00:26:06.403 18:15:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.403 18:15:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:06.403 ************************************ 00:26:06.403 END TEST nvmf_target_disconnect 00:26:06.403 ************************************ 00:26:06.661 18:15:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:06.661 00:26:06.661 real 5m6.123s 00:26:06.661 user 10m48.983s 00:26:06.661 sys 1m14.042s 00:26:06.661 18:15:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.661 18:15:29 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.661 ************************************ 00:26:06.661 END TEST nvmf_host 00:26:06.661 ************************************ 00:26:06.661 18:15:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:06.661 18:15:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:06.661 18:15:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:06.661 18:15:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:06.661 18:15:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.661 18:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:06.661 ************************************ 00:26:06.662 START TEST nvmf_target_core_interrupt_mode 00:26:06.662 ************************************ 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:06.662 * Looking for test storage... 
00:26:06.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:06.662 18:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.662 --rc 
genhtml_branch_coverage=1 00:26:06.662 --rc genhtml_function_coverage=1 00:26:06.662 --rc genhtml_legend=1 00:26:06.662 --rc geninfo_all_blocks=1 00:26:06.662 --rc geninfo_unexecuted_blocks=1 00:26:06.662 00:26:06.662 ' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.662 --rc genhtml_branch_coverage=1 00:26:06.662 --rc genhtml_function_coverage=1 00:26:06.662 --rc genhtml_legend=1 00:26:06.662 --rc geninfo_all_blocks=1 00:26:06.662 --rc geninfo_unexecuted_blocks=1 00:26:06.662 00:26:06.662 ' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.662 --rc genhtml_branch_coverage=1 00:26:06.662 --rc genhtml_function_coverage=1 00:26:06.662 --rc genhtml_legend=1 00:26:06.662 --rc geninfo_all_blocks=1 00:26:06.662 --rc geninfo_unexecuted_blocks=1 00:26:06.662 00:26:06.662 ' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.662 --rc genhtml_branch_coverage=1 00:26:06.662 --rc genhtml_function_coverage=1 00:26:06.662 --rc genhtml_legend=1 00:26:06.662 --rc geninfo_all_blocks=1 00:26:06.662 --rc geninfo_unexecuted_blocks=1 00:26:06.662 00:26:06.662 ' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.662 
18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.662 18:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:06.662 
18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:06.662 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:06.663 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:06.663 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:06.663 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.663 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:06.663 ************************************ 00:26:06.663 START TEST nvmf_abort 00:26:06.663 ************************************ 00:26:06.663 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:06.922 * Looking for test storage... 
00:26:06.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:06.922 18:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:06.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.922 --rc genhtml_branch_coverage=1 00:26:06.922 --rc genhtml_function_coverage=1 00:26:06.922 --rc genhtml_legend=1 00:26:06.922 --rc geninfo_all_blocks=1 00:26:06.922 --rc geninfo_unexecuted_blocks=1 00:26:06.922 00:26:06.922 ' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:06.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.922 --rc genhtml_branch_coverage=1 00:26:06.922 --rc genhtml_function_coverage=1 00:26:06.922 --rc genhtml_legend=1 00:26:06.922 --rc geninfo_all_blocks=1 00:26:06.922 --rc geninfo_unexecuted_blocks=1 00:26:06.922 00:26:06.922 ' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:06.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.922 --rc genhtml_branch_coverage=1 00:26:06.922 --rc genhtml_function_coverage=1 00:26:06.922 --rc genhtml_legend=1 00:26:06.922 --rc geninfo_all_blocks=1 00:26:06.922 --rc geninfo_unexecuted_blocks=1 00:26:06.922 00:26:06.922 ' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:06.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.922 --rc genhtml_branch_coverage=1 00:26:06.922 --rc genhtml_function_coverage=1 00:26:06.922 --rc genhtml_legend=1 00:26:06.922 --rc geninfo_all_blocks=1 00:26:06.922 --rc geninfo_unexecuted_blocks=1 00:26:06.922 00:26:06.922 ' 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.922 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.923 18:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.923 18:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:06.923 18:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:09.456 18:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:09.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:09.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.456 
18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:09.456 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:09.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:09.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.457 18:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:09.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:26:09.457 00:26:09.457 --- 10.0.0.2 ping statistics --- 00:26:09.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.457 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:26:09.457 00:26:09.457 --- 10.0.0.1 ping statistics --- 00:26:09.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.457 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1586153 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1586153 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1586153 ']' 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.457 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.457 [2024-12-09 18:15:32.276643] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:09.457 [2024-12-09 18:15:32.277641] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:26:09.457 [2024-12-09 18:15:32.277705] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.457 [2024-12-09 18:15:32.347748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:09.457 [2024-12-09 18:15:32.401752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.457 [2024-12-09 18:15:32.401813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.457 [2024-12-09 18:15:32.401840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.457 [2024-12-09 18:15:32.401851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.457 [2024-12-09 18:15:32.401860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.457 [2024-12-09 18:15:32.403355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.457 [2024-12-09 18:15:32.403469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.457 [2024-12-09 18:15:32.403474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.457 [2024-12-09 18:15:32.488972] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:09.457 [2024-12-09 18:15:32.489189] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:09.457 [2024-12-09 18:15:32.489208] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:26:09.457 [2024-12-09 18:15:32.489450] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 [2024-12-09 18:15:32.580195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:26:09.716 Malloc0 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 Delay0 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 [2024-12-09 18:15:32.652409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.716 18:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:09.974 [2024-12-09 18:15:32.760507] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:11.880 Initializing NVMe Controllers 00:26:11.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:11.880 controller IO queue size 128 less than required 00:26:11.880 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:11.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:11.880 Initialization complete. Launching workers. 
00:26:11.880 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27724 00:26:11.880 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27781, failed to submit 66 00:26:11.881 success 27724, unsuccessful 57, failed 0 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.881 rmmod nvme_tcp 00:26:11.881 rmmod nvme_fabrics 00:26:11.881 rmmod nvme_keyring 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.881 18:15:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1586153 ']' 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1586153 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1586153 ']' 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1586153 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.881 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1586153 00:26:12.143 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.143 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.143 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1586153' 00:26:12.143 killing process with pid 1586153 00:26:12.143 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1586153 00:26:12.143 18:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1586153 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.143 18:15:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.143 18:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.678 00:26:14.678 real 0m7.535s 00:26:14.678 user 0m9.366s 00:26:14.678 sys 0m3.026s 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:14.678 ************************************ 00:26:14.678 END TEST nvmf_abort 00:26:14.678 ************************************ 00:26:14.678 18:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:14.678 ************************************ 00:26:14.678 START TEST nvmf_ns_hotplug_stress 00:26:14.678 ************************************ 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:14.678 * Looking for test storage... 
00:26:14.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.678 18:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:14.678 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.679 18:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.679 --rc genhtml_branch_coverage=1 00:26:14.679 --rc genhtml_function_coverage=1 00:26:14.679 --rc genhtml_legend=1 00:26:14.679 --rc geninfo_all_blocks=1 00:26:14.679 --rc geninfo_unexecuted_blocks=1 00:26:14.679 00:26:14.679 ' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.679 --rc genhtml_branch_coverage=1 00:26:14.679 --rc genhtml_function_coverage=1 00:26:14.679 --rc genhtml_legend=1 00:26:14.679 --rc geninfo_all_blocks=1 00:26:14.679 --rc geninfo_unexecuted_blocks=1 00:26:14.679 00:26:14.679 ' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.679 --rc genhtml_branch_coverage=1 00:26:14.679 --rc genhtml_function_coverage=1 00:26:14.679 --rc genhtml_legend=1 00:26:14.679 --rc geninfo_all_blocks=1 00:26:14.679 --rc geninfo_unexecuted_blocks=1 00:26:14.679 00:26:14.679 ' 00:26:14.679 18:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.679 --rc genhtml_branch_coverage=1 00:26:14.679 --rc genhtml_function_coverage=1 00:26:14.679 --rc genhtml_legend=1 00:26:14.679 --rc geninfo_all_blocks=1 00:26:14.679 --rc geninfo_unexecuted_blocks=1 00:26:14.679 00:26:14.679 ' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.679 18:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.679 
18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.679 18:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.580 
18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.580 18:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:16.580 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.580 18:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:16.580 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.580 
18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:16.580 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:16.580 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:16.580 
18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:16.580 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.581 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:16.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:16.842 00:26:16.842 --- 10.0.0.2 ping statistics --- 00:26:16.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.842 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:16.842 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:26:16.842 00:26:16.842 --- 10.0.0.1 ping statistics --- 00:26:16.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.843 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:16.843 18:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1588377 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1588377 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1588377 ']' 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.843 18:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:16.843 [2024-12-09 18:15:39.792942] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:16.843 [2024-12-09 18:15:39.794062] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:26:16.843 [2024-12-09 18:15:39.794134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.843 [2024-12-09 18:15:39.869643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:17.102 [2024-12-09 18:15:39.931084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.102 [2024-12-09 18:15:39.931131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.102 [2024-12-09 18:15:39.931159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.102 [2024-12-09 18:15:39.931171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.102 [2024-12-09 18:15:39.931180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
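The namespace plumbing logged above (nvmf/common.sh@263-291) moves one port of the NIC pair into a fresh network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 through iptables, and ping-verifies reachability before launching `nvmf_tgt` inside the namespace. A minimal standalone sketch of that sequence follows; interface names (cvl_0_0/cvl_0_1), addresses, and the port come from this log, but running the commands needs root and the real NICs, so the sketch only collects and prints them:

```shell
# Sketch of the target-namespace setup performed by nvmf/common.sh above.
# Names/addresses are taken from the log; the commands are printed, not
# executed, so the sketch is inspectable without touching the host.
NS=cvl_0_0_ns_spdk

netns_setup_cmds() {
    cat <<EOF
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1
EOF
}

CMDS="$(netns_setup_cmds)"
printf '%s\n' "$CMDS"
```

Note the target end (cvl_0_0, 10.0.0.2) lives inside the namespace while the initiator end (cvl_0_1, 10.0.0.1) stays in the root namespace, which is why the subsequent `nvmf_tgt` invocation is prefixed with `ip netns exec cvl_0_0_ns_spdk`.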
00:26:17.102 [2024-12-09 18:15:39.932699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.102 [2024-12-09 18:15:39.932753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.102 [2024-12-09 18:15:39.932757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.102 [2024-12-09 18:15:40.030021] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:17.102 [2024-12-09 18:15:40.030308] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:17.102 [2024-12-09 18:15:40.030341] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:17.102 [2024-12-09 18:15:40.030631] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:26:17.102 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:17.360 [2024-12-09 18:15:40.353505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.360 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:17.928 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.928 [2024-12-09 18:15:40.950057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.188 18:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:18.446 18:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:18.704 Malloc0 00:26:18.704 18:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:18.962 Delay0 00:26:18.962 18:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.220 18:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:19.477 NULL1 00:26:19.477 18:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:19.735 18:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1588791 00:26:19.735 18:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:19.735 18:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:19.735 18:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.992 18:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.250 18:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:20.250 18:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:20.507 true 00:26:20.765 18:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:20.765 18:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.023 18:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.280 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:21.280 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:21.538 true 00:26:21.538 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:21.538 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.796 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.053 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:22.053 18:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:22.310 true 00:26:22.310 18:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:22.310 18:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:23.241 Read completed with error (sct=0, sc=11) 00:26:23.241 18:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.499 18:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:23.499 18:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:23.757 true 00:26:23.757 18:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:23.757 18:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:24.014 18:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:24.272 18:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:24.272 
18:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:24.529 true 00:26:24.529 18:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:24.529 18:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:24.787 18:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.044 18:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:25.044 18:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:25.302 true 00:26:25.302 18:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:25.302 18:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:26.234 18:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:26.234 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:26:26.492 18:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:26.492 18:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:26.750 true 00:26:27.008 18:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:27.008 18:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:27.266 18:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.523 18:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:27.523 18:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:27.780 true 00:26:27.780 18:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:27.780 18:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.040 18:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.320 18:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:28.320 18:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:28.595 true 00:26:28.595 18:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:28.595 18:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:29.527 18:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.785 18:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:29.785 18:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:30.042 true 00:26:30.042 18:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:30.042 18:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.299 18:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.557 18:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:30.557 18:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:30.814 true 00:26:30.814 18:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:30.814 18:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.071 18:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:31.328 18:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:31.329 18:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:31.586 true 00:26:31.586 18:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:31.586 18:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.518 18:15:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.775 18:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:32.775 18:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:33.033 true 00:26:33.033 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:33.033 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.292 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:33.551 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:33.551 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:33.809 true 00:26:34.067 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:34.067 18:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
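Every iteration above repeats the same four-step pattern from target/ns_hotplug_stress.sh@44-50: detach namespace 1 from cnode1, re-attach Delay0, bump `null_size` by one (1000, 1001, 1002, ...), and grow NULL1 to the new size, all while `spdk_nvme_perf` keeps I/O in flight. A dry-run sketch of that loop, with `rpc` as a local stub standing in for scripts/rpc.py so the generated command stream can be shown without a live target:

```shell
# Dry-run sketch of the hotplug loop in the log (ns_hotplug_stress.sh@44-50).
# "rpc" is a stub for scripts/rpc.py; against a real target it would be the
# actual RPC client talking to nvmf_tgt over /var/tmp/spdk.sock.
rpc() { CMD_LOG+="rpc.py $*"$'\n'; }

CMD_LOG=""
null_size=1000
hotplug_iteration() {
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
}

# Replay the first three iterations (null_size 1001..1003, as in the log).
for _ in 1 2 3; do hotplug_iteration; done
printf '%s' "$CMD_LOG"
```

The Delay0 bdev (a delay wrapper over Malloc0, created earlier with 1s latencies) makes the detach slow enough to race against outstanding perf I/O, which is the point of the stress test.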
00:26:34.325 18:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.583 18:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:34.583 18:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:34.841 true 00:26:34.841 18:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:34.841 18:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:35.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.778 18:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:36.036 18:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:36.036 18:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:36.294 true 00:26:36.294 18:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:36.294 18:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.551 18:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.809 18:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:36.809 18:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:37.067 true 00:26:37.067 18:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:37.067 18:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.002 18:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:38.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:38.260 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:38.260 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1018 00:26:38.518 true 00:26:38.518 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:38.518 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.775 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:39.033 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:39.033 18:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:39.291 true 00:26:39.291 18:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:39.291 18:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:39.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:40.115 18:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:40.373 18:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:40.373 18:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:40.631 true 00:26:40.631 18:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:40.631 18:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.889 18:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.147 18:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:41.147 18:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:41.405 true 00:26:41.405 18:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:41.405 18:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.338 18:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:42.338 18:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
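The recurring `kill -0 1588791` lines between iterations kill nothing: signal 0 performs only the existence/permission check, so its exit status tells the loop whether the `spdk_nvme_perf` process (PID 1588791 here, saved as PERF_PID at ns_hotplug_stress.sh@42) is still running its 30-second workload. The idiom in isolation, using a `sleep` as a stand-in for the perf process:

```shell
# "kill -0" sends no signal; its exit status reports whether the PID exists
# and is signalable. The stress loop above uses it as its while-condition.
sleep 300 &              # stand-in for the long-running spdk_nvme_perf
PERF_PID=$!

if kill -0 "$PERF_PID" 2>/dev/null; then alive=yes; else alive=no; fi

kill "$PERF_PID"
wait "$PERF_PID" 2>/dev/null || true   # reap so the PID truly disappears
if kill -0 "$PERF_PID" 2>/dev/null; then gone=no; else gone=yes; fi
echo "alive=$alive gone=$gone"
```

Once the check fails, the harness stops hot-plugging and moves on to teardown, which is why the iteration count in the log is bounded by the perf run time rather than a fixed loop count.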
00:26:42.338 18:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:42.596 true 00:26:42.596 18:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:42.596 18:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.853 18:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.111 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:43.111 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:43.369 true 00:26:43.369 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:43.369 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:43.627 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.193 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:26:44.193 18:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:44.193 true 00:26:44.193 18:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:44.193 18:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.131 18:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:45.697 18:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:45.697 18:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:45.697 true 00:26:45.697 18:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:45.697 18:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.955 18:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:46.213 18:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:46.213 18:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:46.473 true 00:26:46.733 18:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:46.733 18:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:46.992 18:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:47.251 18:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:47.251 18:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:47.509 true 00:26:47.509 18:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:47.509 18:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.445 18:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.445 18:16:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:48.445 18:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:49.012 true 00:26:49.012 18:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:49.012 18:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.012 18:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:49.578 18:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:26:49.578 18:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:26:49.578 true 00:26:49.578 18:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:49.578 18:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.836 18:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:26:50.096 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:26:50.096 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:26:50.356 Initializing NVMe Controllers 00:26:50.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.356 Controller IO queue size 128, less than required. 00:26:50.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:50.356 Controller IO queue size 128, less than required. 00:26:50.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:50.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:50.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:50.356 Initialization complete. Launching workers. 
00:26:50.356 ======================================================== 00:26:50.356 Latency(us) 00:26:50.356 Device Information : IOPS MiB/s Average min max 00:26:50.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 365.29 0.18 128624.12 3314.79 1014932.79 00:26:50.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7777.14 3.80 16410.03 2777.53 451139.41 00:26:50.356 ======================================================== 00:26:50.356 Total : 8142.42 3.98 21444.20 2777.53 1014932.79 00:26:50.356 00:26:50.356 true 00:26:50.614 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1588791 00:26:50.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1588791) - No such process 00:26:50.614 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1588791 00:26:50.614 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.872 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:51.130 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:51.130 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:51.130 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:51.130 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.130 18:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:51.388 null0 00:26:51.388 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:51.388 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.388 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:51.646 null1 00:26:51.646 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:51.646 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.646 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:51.904 null2 00:26:51.904 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:51.904 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.904 18:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:52.163 null3 00:26:52.163 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:26:52.163 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:52.163 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:52.423 null4 00:26:52.423 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:52.423 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:52.423 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:52.681 null5 00:26:52.681 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:52.681 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:52.681 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:52.939 null6 00:26:52.939 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:52.939 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:52.939 18:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:53.197 null7 00:26:53.197 18:16:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.197 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1592809 1592810 1592812 1592814 1592816 1592818 1592820 1592822 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.198 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:53.455 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:53.455 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.455 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:26:53.455 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:53.455 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:53.456 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:53.456 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:53.456 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.021 18:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:54.021 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:54.021 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.280 18:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:54.280 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:54.280 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:54.280 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:54.280 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:54.280 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:54.538 18:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.538 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:54.797 18:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:54.797 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.055 18:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:55.313 18:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:55.313 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:55.572 18:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.572 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.572 18:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:55.831 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.831 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:55.831 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:55.831 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:56.089 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:56.089 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:56.089 18:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:56.089 18:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:56.347 18:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.347 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:56.606 18:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:56.606 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.863 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.864 18:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:57.122 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:57.413 18:16:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.413 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:57.699 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.958 18:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:58.216 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.216 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.216 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:58.474 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.732 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:58.733 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:58.991 18:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:58.991 18:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:59.249 18:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.249 rmmod nvme_tcp 00:26:59.249 rmmod nvme_fabrics 00:26:59.249 rmmod nvme_keyring 00:26:59.249 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1588377 ']' 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1588377 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1588377 ']' 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1588377 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1588377 00:26:59.250 18:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1588377' 00:26:59.250 killing process with pid 1588377 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1588377 00:26:59.250 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1588377 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:59.508 18:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.508 18:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:02.045 00:27:02.045 real 0m47.325s 00:27:02.045 user 3m18.308s 00:27:02.045 sys 0m21.915s 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:02.045 ************************************ 00:27:02.045 END TEST nvmf_ns_hotplug_stress 00:27:02.045 ************************************ 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:02.045 ************************************ 00:27:02.045 START TEST nvmf_delete_subsystem 00:27:02.045 ************************************ 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:02.045 * Looking for test storage... 00:27:02.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.045 
18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:02.045 18:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:02.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.045 --rc genhtml_branch_coverage=1 00:27:02.045 --rc genhtml_function_coverage=1 00:27:02.045 --rc genhtml_legend=1 00:27:02.045 --rc geninfo_all_blocks=1 00:27:02.045 --rc geninfo_unexecuted_blocks=1 00:27:02.045 00:27:02.045 ' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:02.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.045 --rc genhtml_branch_coverage=1 00:27:02.045 --rc genhtml_function_coverage=1 00:27:02.045 --rc genhtml_legend=1 00:27:02.045 --rc geninfo_all_blocks=1 00:27:02.045 --rc geninfo_unexecuted_blocks=1 00:27:02.045 00:27:02.045 ' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:02.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.045 --rc genhtml_branch_coverage=1 00:27:02.045 --rc genhtml_function_coverage=1 00:27:02.045 --rc genhtml_legend=1 00:27:02.045 --rc geninfo_all_blocks=1 00:27:02.045 --rc 
geninfo_unexecuted_blocks=1 00:27:02.045 00:27:02.045 ' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:02.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.045 --rc genhtml_branch_coverage=1 00:27:02.045 --rc genhtml_function_coverage=1 00:27:02.045 --rc genhtml_legend=1 00:27:02.045 --rc geninfo_all_blocks=1 00:27:02.045 --rc geninfo_unexecuted_blocks=1 00:27:02.045 00:27:02.045 ' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.045 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.046 
18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:02.046 18:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:02.046 18:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.952 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:03.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:27:03.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:03.953 18:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:03.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:03.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:03.953 18:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.953 18:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:27:04.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:27:04.214 00:27:04.214 --- 10.0.0.2 ping statistics --- 00:27:04.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.214 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:04.214 00:27:04.214 --- 10.0.0.1 ping statistics --- 00:27:04.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.214 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.214 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1595573 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1595573 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1595573 ']' 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.215 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.215 [2024-12-09 18:16:27.152272] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:04.215 [2024-12-09 18:16:27.153281] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:27:04.215 [2024-12-09 18:16:27.153329] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.215 [2024-12-09 18:16:27.223789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:04.474 [2024-12-09 18:16:27.283247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.474 [2024-12-09 18:16:27.283307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.474 [2024-12-09 18:16:27.283336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.474 [2024-12-09 18:16:27.283346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.474 [2024-12-09 18:16:27.283355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.474 [2024-12-09 18:16:27.288570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.474 [2024-12-09 18:16:27.288582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.474 [2024-12-09 18:16:27.382977] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:04.474 [2024-12-09 18:16:27.383010] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:04.474 [2024-12-09 18:16:27.383217] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:04.474 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.474 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:04.474 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:04.474 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:04.474 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 [2024-12-09 18:16:27.433280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 [2024-12-09 18:16:27.449513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 NULL1 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 Delay0 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1595719 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:04.475 18:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:04.733 [2024-12-09 18:16:27.530837] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:27:06.637 18:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:06.637 18:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.637 18:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:06.895 [deleting the subsystem aborted the in-flight perf I/O: several hundred repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided]
00:27:06.896 [2024-12-09 18:16:29.704488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d30000c40 is same with the state(6) to be set
00:27:07.833 [2024-12-09 18:16:30.668429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b29b0 is same with the state(6) to be set
00:27:07.833 [further repeated "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:27:07.834 [2024-12-09 18:16:30.701148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d3000d350 is same with the state(6) to be set
00:27:07.834 [2024-12-09 18:16:30.706096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b14a0 is same with the state(6) to be set
00:27:07.834 [2024-12-09 18:16:30.706363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b12c0 is same with the state(6) to be set
00:27:07.834 [2024-12-09 18:16:30.706608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b1860 is same with the state(6) to be set
00:27:07.834 Initializing NVMe Controllers
00:27:07.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:07.834 Controller IO queue size 128, less than required.
00:27:07.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:07.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:07.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:07.834 Initialization complete. Launching workers.
00:27:07.834 ========================================================
00:27:07.834                                                  Latency(us)
00:27:07.834 Device Information                                           :     IOPS    MiB/s    Average        min        max
00:27:07.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   187.98     0.09  960725.88     940.69 1014524.77
00:27:07.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   172.60     0.08  867460.54     641.40 1013818.27
00:27:07.834 ========================================================
00:27:07.834 Total                                                        :   360.58     0.18  916081.68     641.40 1014524.77
00:27:07.834
00:27:07.834 [2024-12-09 18:16:30.707538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b29b0 (9): Bad file descriptor
00:27:07.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:07.834 18:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.834 18:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:27:07.834 18:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1595719
00:27:07.834 18:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1595719
00:27:08.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh:
line 35: kill: (1595719) - No such process
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1595719
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1595719
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1595719
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:08.403 [2024-12-09 18:16:31.229483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1596116
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
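The `delay=0` just set feeds the probe loop traced in the following entries: `kill -0` sends no signal and only tests whether the perf PID still exists, and the script sleeps 0.5 s between probes until the process disappears or the counter bound trips. A self-contained sketch of that pattern, using a short-lived `sleep` in place of `spdk_nvme_perf`:

```shell
# Poll-until-exit pattern from delete_subsystem.sh, demonstrated with a dummy
# background process (a 1-second sleep stands in for spdk_nvme_perf).
sleep 1 &
pid=$!

delay=0
while kill -0 "$pid" 2>/dev/null; do   # kill -0 probes the PID without signaling it
    if (( delay++ > 20 )); then        # same bound the script uses (~10 s at 0.5 s/probe)
        echo "timed out waiting for $pid" >&2
        break
    fi
    sleep 0.5
done
wait "$pid" 2>/dev/null                # reap the child once it has exited
echo "process $pid gone"
```

When the process dies between probes, `kill -0` fails with "No such process", which is exactly the bash error visible in the log above and is expected rather than a test failure.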
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:08.403 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
[2024-12-09 18:16:31.294590] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:27:08.969 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:08.969 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:08.969 18:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:09.229 18:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:09.229 18:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:09.229 18:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:09.799 18:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:09.799 18:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:09.799 18:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:10.400 18:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:10.400 18:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:10.400 18:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:10.966 18:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:10.966 18:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:10.966 18:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:11.224 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:11.224 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:11.224 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:11.483 Initializing NVMe Controllers
00:27:11.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:11.483 Controller IO queue size 128, less than required.
00:27:11.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:11.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:11.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:11.483 Initialization complete. Launching workers.
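The Total row of the first perf summary above (the run aborted by `nvmf_delete_subsystem`) can be reproduced from its per-core rows: total IOPS is the sum, and the overall average latency is the IOPS-weighted mean of the per-core averages. A quick check, with the values copied from the log (small deltas come from display rounding):

```shell
# Cross-check of the first latency summary's Total row.
awk 'BEGIN {
    c2_iops = 187.98; c2_avg = 960725.88;   # "from core 2" row
    c3_iops = 172.60; c3_avg = 867460.54;   # "from core 3" row
    total = c2_iops + c3_iops;
    avg   = (c2_iops * c2_avg + c3_iops * c3_avg) / total;
    printf "Total IOPS: %.2f  Avg latency: %.2f us\n", total, avg;  # log: 360.58, 916081.68
}'
```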
00:27:11.483 ========================================================
00:27:11.483                                                  Latency(us)
00:27:11.483 Device Information                                           :     IOPS    MiB/s    Average        min        max
00:27:11.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1004565.68 1000173.01 1011183.81
00:27:11.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1004778.18 1000194.54 1011024.87
00:27:11.483 ========================================================
00:27:11.483 Total                                                        :   256.00     0.12 1004671.93 1000173.01 1011183.81
00:27:11.483
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1596116
00:27:11.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1596116) - No such process
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1596116
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:11.742 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:11.742 rmmod nvme_tcp
00:27:12.002 rmmod nvme_fabrics
00:27:12.002 rmmod nvme_keyring
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1595573 ']'
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1595573
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1595573 ']'
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1595573
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1595573
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1595573'
00:27:12.002 killing process with pid 1595573
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1595573
00:27:12.002 18:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1595573
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:12.264 18:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:12.264 18:16:35
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.175 00:27:14.175 real 0m12.510s 00:27:14.175 user 0m24.769s 00:27:14.175 sys 0m3.847s 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:14.175 ************************************ 00:27:14.175 END TEST nvmf_delete_subsystem 00:27:14.175 ************************************ 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:14.175 ************************************ 00:27:14.175 START TEST nvmf_host_management 00:27:14.175 ************************************ 00:27:14.175 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:14.175 * Looking for test storage... 
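The teardown traced above runs SPDK's `killprocess` helper from autotest_common.sh: confirm the PID is still alive, log which process is being killed, send it a signal, then `wait` to reap it. A simplified sketch of that pattern — not the exact SPDK implementation, which also special-cases `sudo`-owned processes:

```shell
# Simplified killprocess: verify, announce, terminate, reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"                              # default SIGTERM
    wait "$pid" 2>/dev/null || true          # reap if it is our own child
}

sleep 30 &           # demo: a long-running child process
killprocess $!
```

Calling `wait` on the killed PID avoids leaving a zombie behind when the target is a child of the calling shell, which matters in long-running CI jobs like this one.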
00:27:14.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.434 18:16:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.434 --rc genhtml_branch_coverage=1 00:27:14.434 --rc genhtml_function_coverage=1 00:27:14.434 --rc genhtml_legend=1 00:27:14.434 --rc geninfo_all_blocks=1 00:27:14.434 --rc geninfo_unexecuted_blocks=1 00:27:14.434 00:27:14.434 ' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.434 --rc genhtml_branch_coverage=1 00:27:14.434 --rc genhtml_function_coverage=1 00:27:14.434 --rc genhtml_legend=1 00:27:14.434 --rc geninfo_all_blocks=1 00:27:14.434 --rc geninfo_unexecuted_blocks=1 00:27:14.434 00:27:14.434 ' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.434 --rc genhtml_branch_coverage=1 00:27:14.434 --rc genhtml_function_coverage=1 00:27:14.434 --rc genhtml_legend=1 00:27:14.434 --rc geninfo_all_blocks=1 00:27:14.434 --rc geninfo_unexecuted_blocks=1 00:27:14.434 00:27:14.434 ' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:14.434 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.434 --rc genhtml_branch_coverage=1 00:27:14.434 --rc genhtml_function_coverage=1 00:27:14.434 --rc genhtml_legend=1 00:27:14.434 --rc geninfo_all_blocks=1 00:27:14.434 --rc geninfo_unexecuted_blocks=1 00:27:14.434 00:27:14.434 ' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.434 18:16:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.434 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.435 
18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.435 18:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.970 
18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.970 18:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.970 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.970 18:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.970 18:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.970 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.970 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.971 18:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:27:16.971 00:27:16.971 --- 10.0.0.2 ping statistics --- 00:27:16.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.971 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:27:16.971 00:27:16.971 --- 10.0.0.1 ping statistics --- 00:27:16.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.971 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
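The `nvmf_tcp_init` sequence traced above (flush addresses, create a network namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping in both directions) can be sketched as a dry-run shell function. Interface names and addresses are taken from the trace; the `run`/`DRY_RUN` wrapper is an assumption added here so the sketch runs without root instead of actually reconfiguring the host.

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init sequence from nvmf/common.sh, reconstructed
# from the trace above (not the actual helper). With DRY_RUN=1 (default)
# the commands are echoed rather than executed, so no root is required.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
    target_if=$1 initiator_if=$2 ns=$3   # e.g. cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"   # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic to the target's listening port
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Connectivity check in both directions, as in the trace
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Running the target inside its own namespace is what lets `NVMF_TARGET_NS_CMD` (`ip netns exec cvl_0_0_ns_spdk`) give the target app an isolated view of `cvl_0_0`/10.0.0.2 while the initiator keeps `cvl_0_1`/10.0.0.1 on the host side.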
00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1598457 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1598457 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1598457 ']' 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.971 [2024-12-09 18:16:39.625082] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:16.971 [2024-12-09 18:16:39.626178] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:27:16.971 [2024-12-09 18:16:39.626231] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.971 [2024-12-09 18:16:39.698346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.971 [2024-12-09 18:16:39.756479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.971 [2024-12-09 18:16:39.756540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.971 [2024-12-09 18:16:39.756564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.971 [2024-12-09 18:16:39.756575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.971 [2024-12-09 18:16:39.756585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:16.971 [2024-12-09 18:16:39.758090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.971 [2024-12-09 18:16:39.758155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.971 [2024-12-09 18:16:39.758204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.971 [2024-12-09 18:16:39.758208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.971 [2024-12-09 18:16:39.847189] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:16.971 [2024-12-09 18:16:39.847342] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:16.971 [2024-12-09 18:16:39.847646] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:16.971 [2024-12-09 18:16:39.848251] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:16.971 [2024-12-09 18:16:39.848452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.971 [2024-12-09 18:16:39.898932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.971 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.971 18:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.972 Malloc0 00:27:16.972 [2024-12-09 18:16:39.979129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.972 18:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1598625 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1598625 /var/tmp/bdevperf.sock 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1598625 ']' 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:17.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.230 { 00:27:17.230 "params": { 00:27:17.230 "name": "Nvme$subsystem", 00:27:17.230 "trtype": "$TEST_TRANSPORT", 00:27:17.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.230 "adrfam": "ipv4", 00:27:17.230 "trsvcid": "$NVMF_PORT", 00:27:17.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.230 "hdgst": ${hdgst:-false}, 00:27:17.230 "ddgst": ${ddgst:-false} 00:27:17.230 }, 00:27:17.230 "method": "bdev_nvme_attach_controller" 00:27:17.230 } 00:27:17.230 EOF 00:27:17.230 )") 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:17.230 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:17.230 "params": { 00:27:17.230 "name": "Nvme0", 00:27:17.230 "trtype": "tcp", 00:27:17.230 "traddr": "10.0.0.2", 00:27:17.230 "adrfam": "ipv4", 00:27:17.230 "trsvcid": "4420", 00:27:17.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:17.230 "hdgst": false, 00:27:17.230 "ddgst": false 00:27:17.230 }, 00:27:17.230 "method": "bdev_nvme_attach_controller" 00:27:17.230 }' 00:27:17.230 [2024-12-09 18:16:40.059461] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:27:17.230 [2024-12-09 18:16:40.059573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598625 ] 00:27:17.230 [2024-12-09 18:16:40.130675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.230 [2024-12-09 18:16:40.190606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.799 Running I/O for 10 seconds... 
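The rendered `bdev_nvme_attach_controller` JSON that `gen_nvmf_target_json` pipes to bdevperf via `/dev/fd/63` above can be sketched as a small heredoc generator. The function name and the single-object output shape are assumptions modeled on the template and rendered config visible in the trace, not the real helper.

```shell
#!/bin/sh
# Sketch (hedged): emit one attach-controller config entry in the shape
# shown in the trace above. The real gen_nvmf_target_json substitutes
# $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT per subsystem.
gen_target_json_sketch() {
    subsystem=$1 trtype=$2 traddr=$3 trsvcid=$4
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$trtype",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json_sketch 0 tcp 10.0.0.2 4420
```

With subsystem index 0 and the trace's transport parameters this reproduces the `Nvme0` / `nqn.2016-06.io.spdk:cnode0` entry that bdevperf consumes through `--json /dev/fd/63`.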
00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:17.799 18:16:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:17.799 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.059 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:18.059 [2024-12-09 18:16:40.931300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.059 [2024-12-09 18:16:40.931362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.059 [2024-12-09 18:16:40.931392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:18.060 [2024-12-09 18:16:40.931788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.931949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 
18:16:40.931978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.931993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.060 [2024-12-09 18:16:40.932640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.060 [2024-12-09 18:16:40.932654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 
[2024-12-09 18:16:40.932669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.932977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.932992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 18:16:40.933350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.933366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.061 [2024-12-09 
18:16:40.933381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.934581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:18.061 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.061 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:18.061 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.061 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:18.061 task offset: 81920 on job bdev=Nvme0n1 fails 00:27:18.061 00:27:18.061 Latency(us) 00:27:18.061 [2024-12-09T17:16:41.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.061 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.061 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:18.061 Verification LBA range: start 0x0 length 0x400 00:27:18.061 Nvme0n1 : 0.40 1596.62 99.79 159.66 0.00 35368.23 2864.17 35146.71 00:27:18.061 [2024-12-09T17:16:41.102Z] =================================================================================================================== 00:27:18.061 [2024-12-09T17:16:41.102Z] Total : 1596.62 99.79 159.66 0.00 35368.23 2864.17 35146.71 00:27:18.061 [2024-12-09 18:16:40.936492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:18.061 [2024-12-09 18:16:40.936520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd58660 (9): Bad file descriptor 00:27:18.061 [2024-12-09 18:16:40.937779] ctrlr.c: 
825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:18.061 [2024-12-09 18:16:40.937895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:18.061 [2024-12-09 18:16:40.937923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.061 [2024-12-09 18:16:40.937951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:18.061 [2024-12-09 18:16:40.937969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:18.061 [2024-12-09 18:16:40.937989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.061 [2024-12-09 18:16:40.938002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd58660 00:27:18.061 [2024-12-09 18:16:40.938037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd58660 (9): Bad file descriptor 00:27:18.061 [2024-12-09 18:16:40.938063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:18.061 [2024-12-09 18:16:40.938078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:18.061 [2024-12-09 18:16:40.938095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:18.061 [2024-12-09 18:16:40.938111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:18.061 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.061 18:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1598625 00:27:18.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1598625) - No such process 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:18.997 { 00:27:18.997 "params": { 00:27:18.997 "name": "Nvme$subsystem", 00:27:18.997 "trtype": "$TEST_TRANSPORT", 00:27:18.997 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:18.997 "adrfam": "ipv4", 00:27:18.997 "trsvcid": "$NVMF_PORT", 00:27:18.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.997 "hdgst": ${hdgst:-false}, 00:27:18.997 "ddgst": ${ddgst:-false} 00:27:18.997 }, 00:27:18.997 "method": "bdev_nvme_attach_controller" 00:27:18.997 } 00:27:18.997 EOF 00:27:18.997 )") 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:18.997 18:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:18.997 "params": { 00:27:18.997 "name": "Nvme0", 00:27:18.997 "trtype": "tcp", 00:27:18.997 "traddr": "10.0.0.2", 00:27:18.997 "adrfam": "ipv4", 00:27:18.997 "trsvcid": "4420", 00:27:18.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.998 "hdgst": false, 00:27:18.998 "ddgst": false 00:27:18.998 }, 00:27:18.998 "method": "bdev_nvme_attach_controller" 00:27:18.998 }' 00:27:18.998 [2024-12-09 18:16:41.995470] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:27:18.998 [2024-12-09 18:16:41.995570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598783 ] 00:27:19.257 [2024-12-09 18:16:42.066202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.257 [2024-12-09 18:16:42.124987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.516 Running I/O for 1 seconds... 
00:27:20.451 1600.00 IOPS, 100.00 MiB/s 00:27:20.451 Latency(us) 00:27:20.451 [2024-12-09T17:16:43.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.451 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.451 Verification LBA range: start 0x0 length 0x400 00:27:20.451 Nvme0n1 : 1.06 1575.75 98.48 0.00 0.00 38568.30 9854.67 49321.91 00:27:20.451 [2024-12-09T17:16:43.492Z] =================================================================================================================== 00:27:20.451 [2024-12-09T17:16:43.492Z] Total : 1575.75 98.48 0.00 0.00 38568.30 9854.67 49321.91 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:20.709 18:16:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.709 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:20.709 rmmod nvme_tcp 00:27:20.709 rmmod nvme_fabrics 00:27:20.709 rmmod nvme_keyring 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1598457 ']' 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1598457 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1598457 ']' 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1598457 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598457 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:20.969 18:16:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598457' 00:27:20.969 killing process with pid 1598457 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1598457 00:27:20.969 18:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1598457 00:27:21.230 [2024-12-09 18:16:44.027445] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.230 18:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:23.136 00:27:23.136 real 0m8.938s 00:27:23.136 user 0m18.144s 00:27:23.136 sys 0m3.832s 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:23.136 ************************************ 00:27:23.136 END TEST nvmf_host_management 00:27:23.136 ************************************ 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:23.136 ************************************ 00:27:23.136 START TEST nvmf_lvol 00:27:23.136 ************************************ 00:27:23.136 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:23.395 * Looking for test storage... 
00:27:23.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.395 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:23.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.396 --rc genhtml_branch_coverage=1 00:27:23.396 --rc genhtml_function_coverage=1 00:27:23.396 --rc genhtml_legend=1 00:27:23.396 --rc geninfo_all_blocks=1 00:27:23.396 --rc geninfo_unexecuted_blocks=1 00:27:23.396 00:27:23.396 ' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:23.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.396 --rc genhtml_branch_coverage=1 00:27:23.396 --rc genhtml_function_coverage=1 00:27:23.396 --rc genhtml_legend=1 00:27:23.396 --rc geninfo_all_blocks=1 00:27:23.396 --rc geninfo_unexecuted_blocks=1 00:27:23.396 00:27:23.396 ' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:23.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.396 --rc genhtml_branch_coverage=1 00:27:23.396 --rc genhtml_function_coverage=1 00:27:23.396 --rc genhtml_legend=1 00:27:23.396 --rc geninfo_all_blocks=1 00:27:23.396 --rc geninfo_unexecuted_blocks=1 00:27:23.396 00:27:23.396 ' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:23.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.396 --rc genhtml_branch_coverage=1 00:27:23.396 --rc genhtml_function_coverage=1 00:27:23.396 --rc genhtml_legend=1 00:27:23.396 --rc geninfo_all_blocks=1 00:27:23.396 --rc geninfo_unexecuted_blocks=1 00:27:23.396 00:27:23.396 ' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:23.396 
18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.396 18:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.930 18:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.930 18:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:25.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:25.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.930 18:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:25.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.930 18:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:25.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.930 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:27:25.931 00:27:25.931 --- 10.0.0.2 ping statistics --- 00:27:25.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.931 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:27:25.931 00:27:25.931 --- 10.0.0.1 ping statistics --- 00:27:25.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.931 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1600983 
00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1600983 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1600983 ']' 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.931 [2024-12-09 18:16:48.614004] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:25.931 [2024-12-09 18:16:48.615047] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:27:25.931 [2024-12-09 18:16:48.615098] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.931 [2024-12-09 18:16:48.686487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.931 [2024-12-09 18:16:48.745854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.931 [2024-12-09 18:16:48.745916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.931 [2024-12-09 18:16:48.745940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.931 [2024-12-09 18:16:48.745951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.931 [2024-12-09 18:16:48.745960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.931 [2024-12-09 18:16:48.747478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.931 [2024-12-09 18:16:48.747580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.931 [2024-12-09 18:16:48.747586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.931 [2024-12-09 18:16:48.838293] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:25.931 [2024-12-09 18:16:48.838518] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:25.931 [2024-12-09 18:16:48.838577] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:25.931 [2024-12-09 18:16:48.838795] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.931 18:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:26.189 [2024-12-09 18:16:49.152271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.189 18:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:26.448 18:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:26.448 18:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:27.016 18:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:27.016 18:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:27.016 18:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:27.584 18:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2af9e7e3-121b-4392-9e9d-67fe70ecd344 00:27:27.585 18:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2af9e7e3-121b-4392-9e9d-67fe70ecd344 lvol 20 00:27:27.585 18:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d82d2696-313f-4566-bb20-9ead9e14bbba 00:27:27.585 18:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:28.152 18:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d82d2696-313f-4566-bb20-9ead9e14bbba 00:27:28.153 18:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:28.411 [2024-12-09 18:16:51.432466] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.672 18:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:28.931 
18:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1601405 00:27:28.931 18:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:28.931 18:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:29.867 18:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d82d2696-313f-4566-bb20-9ead9e14bbba MY_SNAPSHOT 00:27:30.125 18:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=267c73e0-11c3-400f-babf-1bbd6900fd4f 00:27:30.125 18:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d82d2696-313f-4566-bb20-9ead9e14bbba 30 00:27:30.384 18:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 267c73e0-11c3-400f-babf-1bbd6900fd4f MY_CLONE 00:27:30.648 18:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=14811cef-9c03-468d-95d5-d0208496c4cb 00:27:30.648 18:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 14811cef-9c03-468d-95d5-d0208496c4cb 00:27:31.291 18:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1601405 00:27:39.411 Initializing NVMe Controllers 00:27:39.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:39.411 
Controller IO queue size 128, less than required.
00:27:39.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:39.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:27:39.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:27:39.411 Initialization complete. Launching workers.
00:27:39.411 ========================================================
00:27:39.411                                                                                                          Latency(us)
00:27:39.411 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:27:39.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10448.40   40.81    12255.59    5541.28   74459.77
00:27:39.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10367.20   40.50    12354.18    7127.13   57783.79
00:27:39.411 ========================================================
00:27:39.411 Total                                                                    : 20815.59   81.31    12304.69    5541.28   74459.77
00:27:39.411
00:27:39.411 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:27:39.411 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d82d2696-313f-4566-bb20-9ead9e14bbba
00:27:39.669 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2af9e7e3-121b-4392-9e9d-67fe70ecd344
00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 --
# nvmftestfini 00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.927 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.927 rmmod nvme_tcp 00:27:40.187 rmmod nvme_fabrics 00:27:40.187 rmmod nvme_keyring 00:27:40.187 18:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1600983 ']' 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1600983 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1600983 ']' 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1600983 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1600983 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1600983' 00:27:40.187 killing process with pid 1600983 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1600983 00:27:40.187 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1600983 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.448 18:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.448 18:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.355 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:42.355 00:27:42.355 real 0m19.214s 00:27:42.355 user 0m56.756s 00:27:42.355 sys 0m7.494s 00:27:42.355 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.355 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:42.355 ************************************ 00:27:42.355 END TEST nvmf_lvol 00:27:42.355 ************************************ 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:42.613 ************************************ 00:27:42.613 START TEST nvmf_lvs_grow 00:27:42.613 ************************************ 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:42.613 * Looking for test storage... 
00:27:42.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.613 18:17:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:42.613 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.614 18:17:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.614 --rc genhtml_branch_coverage=1 00:27:42.614 --rc genhtml_function_coverage=1 00:27:42.614 --rc genhtml_legend=1 00:27:42.614 --rc geninfo_all_blocks=1 00:27:42.614 --rc geninfo_unexecuted_blocks=1 00:27:42.614 00:27:42.614 ' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.614 --rc genhtml_branch_coverage=1 00:27:42.614 --rc genhtml_function_coverage=1 00:27:42.614 --rc genhtml_legend=1 00:27:42.614 --rc geninfo_all_blocks=1 00:27:42.614 --rc geninfo_unexecuted_blocks=1 00:27:42.614 00:27:42.614 ' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.614 --rc genhtml_branch_coverage=1 00:27:42.614 --rc genhtml_function_coverage=1 00:27:42.614 --rc genhtml_legend=1 00:27:42.614 --rc geninfo_all_blocks=1 00:27:42.614 --rc geninfo_unexecuted_blocks=1 00:27:42.614 00:27:42.614 ' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.614 --rc genhtml_branch_coverage=1 00:27:42.614 --rc genhtml_function_coverage=1 00:27:42.614 --rc genhtml_legend=1 00:27:42.614 --rc geninfo_all_blocks=1 00:27:42.614 --rc 
geninfo_unexecuted_blocks=1 00:27:42.614 00:27:42.614 ' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:42.614 18:17:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.614 18:17:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.614 18:17:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.614 18:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.149 
18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.149 18:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.149 18:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:45.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:45.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:45.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:45.149 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.149 18:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:45.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.150 
18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:27:45.150 00:27:45.150 --- 10.0.0.2 ping statistics --- 00:27:45.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.150 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:45.150 00:27:45.150 --- 10.0.0.1 ping statistics --- 00:27:45.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.150 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.150 18:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1604667 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1604667 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1604667 ']' 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.150 18:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.150 [2024-12-09 18:17:07.849684] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.150 [2024-12-09 18:17:07.850687] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:27:45.150 [2024-12-09 18:17:07.850737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.150 [2024-12-09 18:17:07.919876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.150 [2024-12-09 18:17:07.973633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.150 [2024-12-09 18:17:07.973698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.150 [2024-12-09 18:17:07.973722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.150 [2024-12-09 18:17:07.973732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.150 [2024-12-09 18:17:07.973742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.150 [2024-12-09 18:17:07.974351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.150 [2024-12-09 18:17:08.061686] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:45.150 [2024-12-09 18:17:08.061952] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.150 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:45.409 [2024-12-09 18:17:08.367018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.409 ************************************ 00:27:45.409 START TEST lvs_grow_clean 00:27:45.409 ************************************ 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:27:45.409 18:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:45.409 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:45.669 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:45.669 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:46.239 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:27:46.239 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:27:46.239 18:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:46.239 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:46.239 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:46.239 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 lvol 150 00:27:46.499 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ab2f544e-afa2-48e6-b58e-19c0f8dbfb61 00:27:46.499 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:46.499 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:47.067 [2024-12-09 18:17:09.806888] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:47.067 [2024-12-09 18:17:09.806976] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:47.067 true 00:27:47.067 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:27:47.067 18:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:47.067 18:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:47.067 18:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:47.636 18:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ab2f544e-afa2-48e6-b58e-19c0f8dbfb61 00:27:47.636 18:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:47.896 [2024-12-09 18:17:10.931250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.156 18:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1605103 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1605103 /var/tmp/bdevperf.sock 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1605103 ']' 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.418 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:48.418 [2024-12-09 18:17:11.267053] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:27:48.418 [2024-12-09 18:17:11.267152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605103 ] 00:27:48.418 [2024-12-09 18:17:11.337038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.418 [2024-12-09 18:17:11.399288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.677 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.677 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:27:48.677 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:48.935 Nvme0n1 00:27:48.935 18:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:49.193 [ 00:27:49.193 { 00:27:49.193 "name": "Nvme0n1", 00:27:49.193 "aliases": [ 00:27:49.193 "ab2f544e-afa2-48e6-b58e-19c0f8dbfb61" 00:27:49.193 ], 00:27:49.193 "product_name": "NVMe disk", 00:27:49.193 
"block_size": 4096, 00:27:49.193 "num_blocks": 38912, 00:27:49.193 "uuid": "ab2f544e-afa2-48e6-b58e-19c0f8dbfb61", 00:27:49.193 "numa_id": 0, 00:27:49.193 "assigned_rate_limits": { 00:27:49.193 "rw_ios_per_sec": 0, 00:27:49.193 "rw_mbytes_per_sec": 0, 00:27:49.193 "r_mbytes_per_sec": 0, 00:27:49.193 "w_mbytes_per_sec": 0 00:27:49.193 }, 00:27:49.193 "claimed": false, 00:27:49.193 "zoned": false, 00:27:49.193 "supported_io_types": { 00:27:49.193 "read": true, 00:27:49.193 "write": true, 00:27:49.193 "unmap": true, 00:27:49.193 "flush": true, 00:27:49.193 "reset": true, 00:27:49.193 "nvme_admin": true, 00:27:49.193 "nvme_io": true, 00:27:49.193 "nvme_io_md": false, 00:27:49.193 "write_zeroes": true, 00:27:49.193 "zcopy": false, 00:27:49.193 "get_zone_info": false, 00:27:49.193 "zone_management": false, 00:27:49.193 "zone_append": false, 00:27:49.193 "compare": true, 00:27:49.193 "compare_and_write": true, 00:27:49.193 "abort": true, 00:27:49.193 "seek_hole": false, 00:27:49.193 "seek_data": false, 00:27:49.193 "copy": true, 00:27:49.193 "nvme_iov_md": false 00:27:49.193 }, 00:27:49.193 "memory_domains": [ 00:27:49.193 { 00:27:49.193 "dma_device_id": "system", 00:27:49.193 "dma_device_type": 1 00:27:49.193 } 00:27:49.193 ], 00:27:49.193 "driver_specific": { 00:27:49.193 "nvme": [ 00:27:49.193 { 00:27:49.193 "trid": { 00:27:49.193 "trtype": "TCP", 00:27:49.193 "adrfam": "IPv4", 00:27:49.193 "traddr": "10.0.0.2", 00:27:49.193 "trsvcid": "4420", 00:27:49.193 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:49.193 }, 00:27:49.193 "ctrlr_data": { 00:27:49.193 "cntlid": 1, 00:27:49.193 "vendor_id": "0x8086", 00:27:49.193 "model_number": "SPDK bdev Controller", 00:27:49.194 "serial_number": "SPDK0", 00:27:49.194 "firmware_revision": "25.01", 00:27:49.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.194 "oacs": { 00:27:49.194 "security": 0, 00:27:49.194 "format": 0, 00:27:49.194 "firmware": 0, 00:27:49.194 "ns_manage": 0 00:27:49.194 }, 00:27:49.194 "multi_ctrlr": true, 
00:27:49.194 "ana_reporting": false 00:27:49.194 }, 00:27:49.194 "vs": { 00:27:49.194 "nvme_version": "1.3" 00:27:49.194 }, 00:27:49.194 "ns_data": { 00:27:49.194 "id": 1, 00:27:49.194 "can_share": true 00:27:49.194 } 00:27:49.194 } 00:27:49.194 ], 00:27:49.194 "mp_policy": "active_passive" 00:27:49.194 } 00:27:49.194 } 00:27:49.194 ] 00:27:49.194 18:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1605242 00:27:49.194 18:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:49.194 18:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:49.452 Running I/O for 10 seconds... 00:27:50.390 Latency(us) 00:27:50.390 [2024-12-09T17:17:13.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:50.390 Nvme0n1 : 1.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:27:50.390 [2024-12-09T17:17:13.431Z] =================================================================================================================== 00:27:50.390 [2024-12-09T17:17:13.431Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:27:50.390 00:27:51.329 18:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:27:51.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:51.329 Nvme0n1 : 2.00 15589.50 60.90 0.00 0.00 0.00 0.00 0.00 00:27:51.329 [2024-12-09T17:17:14.370Z] 
=================================================================================================================== 00:27:51.329 [2024-12-09T17:17:14.370Z] Total : 15589.50 60.90 0.00 0.00 0.00 0.00 0.00 00:27:51.329 00:27:51.587 true 00:27:51.587 18:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:27:51.587 18:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:51.847 18:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:51.847 18:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:51.847 18:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1605242 00:27:52.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:52.416 Nvme0n1 : 3.00 15674.67 61.23 0.00 0.00 0.00 0.00 0.00 00:27:52.416 [2024-12-09T17:17:15.457Z] =================================================================================================================== 00:27:52.416 [2024-12-09T17:17:15.457Z] Total : 15674.67 61.23 0.00 0.00 0.00 0.00 0.00 00:27:52.416 00:27:53.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:53.350 Nvme0n1 : 4.00 15788.25 61.67 0.00 0.00 0.00 0.00 0.00 00:27:53.350 [2024-12-09T17:17:16.391Z] =================================================================================================================== 00:27:53.350 [2024-12-09T17:17:16.391Z] Total : 15788.25 61.67 0.00 0.00 0.00 0.00 0.00 00:27:53.350 00:27:54.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:27:54.287 Nvme0n1 : 5.00 15856.40 61.94 0.00 0.00 0.00 0.00 0.00 00:27:54.287 [2024-12-09T17:17:17.328Z] =================================================================================================================== 00:27:54.287 [2024-12-09T17:17:17.328Z] Total : 15856.40 61.94 0.00 0.00 0.00 0.00 0.00 00:27:54.287 00:27:55.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:55.661 Nvme0n1 : 6.00 15901.83 62.12 0.00 0.00 0.00 0.00 0.00 00:27:55.661 [2024-12-09T17:17:18.702Z] =================================================================================================================== 00:27:55.661 [2024-12-09T17:17:18.702Z] Total : 15901.83 62.12 0.00 0.00 0.00 0.00 0.00 00:27:55.661 00:27:56.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:56.604 Nvme0n1 : 7.00 15952.43 62.31 0.00 0.00 0.00 0.00 0.00 00:27:56.604 [2024-12-09T17:17:19.645Z] =================================================================================================================== 00:27:56.604 [2024-12-09T17:17:19.645Z] Total : 15952.43 62.31 0.00 0.00 0.00 0.00 0.00 00:27:56.604 00:27:57.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:57.541 Nvme0n1 : 8.00 15965.00 62.36 0.00 0.00 0.00 0.00 0.00 00:27:57.541 [2024-12-09T17:17:20.582Z] =================================================================================================================== 00:27:57.541 [2024-12-09T17:17:20.582Z] Total : 15965.00 62.36 0.00 0.00 0.00 0.00 0.00 00:27:57.541 00:27:58.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:58.479 Nvme0n1 : 9.00 15995.22 62.48 0.00 0.00 0.00 0.00 0.00 00:27:58.479 [2024-12-09T17:17:21.520Z] =================================================================================================================== 00:27:58.479 [2024-12-09T17:17:21.520Z] Total : 15995.22 62.48 0.00 0.00 0.00 0.00 0.00 00:27:58.479 
00:27:59.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.412 Nvme0n1 : 10.00 16021.30 62.58 0.00 0.00 0.00 0.00 0.00 00:27:59.412 [2024-12-09T17:17:22.453Z] =================================================================================================================== 00:27:59.412 [2024-12-09T17:17:22.453Z] Total : 16021.30 62.58 0.00 0.00 0.00 0.00 0.00 00:27:59.412 00:27:59.412 00:27:59.412 Latency(us) 00:27:59.412 [2024-12-09T17:17:22.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.412 Nvme0n1 : 10.01 16021.15 62.58 0.00 0.00 7984.71 4344.79 17573.36 00:27:59.412 [2024-12-09T17:17:22.453Z] =================================================================================================================== 00:27:59.412 [2024-12-09T17:17:22.453Z] Total : 16021.15 62.58 0.00 0.00 7984.71 4344.79 17573.36 00:27:59.412 { 00:27:59.412 "results": [ 00:27:59.412 { 00:27:59.412 "job": "Nvme0n1", 00:27:59.412 "core_mask": "0x2", 00:27:59.412 "workload": "randwrite", 00:27:59.412 "status": "finished", 00:27:59.412 "queue_depth": 128, 00:27:59.412 "io_size": 4096, 00:27:59.412 "runtime": 10.008081, 00:27:59.412 "iops": 16021.15330601341, 00:27:59.412 "mibps": 62.58263010161488, 00:27:59.412 "io_failed": 0, 00:27:59.412 "io_timeout": 0, 00:27:59.412 "avg_latency_us": 7984.713215810655, 00:27:59.412 "min_latency_us": 4344.794074074074, 00:27:59.412 "max_latency_us": 17573.357037037036 00:27:59.412 } 00:27:59.412 ], 00:27:59.412 "core_count": 1 00:27:59.412 } 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1605103 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1605103 ']' 00:27:59.412 18:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1605103 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605103 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605103' 00:27:59.412 killing process with pid 1605103 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1605103 00:27:59.412 Received shutdown signal, test time was about 10.000000 seconds 00:27:59.412 00:27:59.412 Latency(us) 00:27:59.412 [2024-12-09T17:17:22.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.412 [2024-12-09T17:17:22.453Z] =================================================================================================================== 00:27:59.412 [2024-12-09T17:17:22.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.412 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1605103 00:27:59.671 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:59.929 18:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:00.186 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:28:00.186 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:00.445 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:00.445 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:00.445 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:00.703 [2024-12-09 18:17:23.638926] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:00.703 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:28:00.703 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:00.703 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:28:00.703 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:00.704 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:28:00.961 request: 00:28:00.961 { 00:28:00.961 "uuid": "7a3c6024-6688-41a4-bf4c-85ab8e03f443", 00:28:00.961 "method": 
"bdev_lvol_get_lvstores", 00:28:00.961 "req_id": 1 00:28:00.961 } 00:28:00.961 Got JSON-RPC error response 00:28:00.961 response: 00:28:00.961 { 00:28:00.961 "code": -19, 00:28:00.961 "message": "No such device" 00:28:00.961 } 00:28:00.961 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:00.961 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:00.961 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:00.961 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:00.961 18:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:01.219 aio_bdev 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ab2f544e-afa2-48e6-b58e-19c0f8dbfb61 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ab2f544e-afa2-48e6-b58e-19c0f8dbfb61 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:01.219 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:01.478 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ab2f544e-afa2-48e6-b58e-19c0f8dbfb61 -t 2000 00:28:01.738 [ 00:28:01.738 { 00:28:01.738 "name": "ab2f544e-afa2-48e6-b58e-19c0f8dbfb61", 00:28:01.738 "aliases": [ 00:28:01.738 "lvs/lvol" 00:28:01.738 ], 00:28:01.738 "product_name": "Logical Volume", 00:28:01.738 "block_size": 4096, 00:28:01.738 "num_blocks": 38912, 00:28:01.738 "uuid": "ab2f544e-afa2-48e6-b58e-19c0f8dbfb61", 00:28:01.738 "assigned_rate_limits": { 00:28:01.738 "rw_ios_per_sec": 0, 00:28:01.738 "rw_mbytes_per_sec": 0, 00:28:01.738 "r_mbytes_per_sec": 0, 00:28:01.738 "w_mbytes_per_sec": 0 00:28:01.738 }, 00:28:01.738 "claimed": false, 00:28:01.738 "zoned": false, 00:28:01.738 "supported_io_types": { 00:28:01.738 "read": true, 00:28:01.738 "write": true, 00:28:01.738 "unmap": true, 00:28:01.738 "flush": false, 00:28:01.738 "reset": true, 00:28:01.738 "nvme_admin": false, 00:28:01.738 "nvme_io": false, 00:28:01.738 "nvme_io_md": false, 00:28:01.738 "write_zeroes": true, 00:28:01.738 "zcopy": false, 00:28:01.738 "get_zone_info": false, 00:28:01.738 "zone_management": false, 00:28:01.738 "zone_append": false, 00:28:01.738 "compare": false, 00:28:01.738 "compare_and_write": false, 00:28:01.738 "abort": false, 00:28:01.738 "seek_hole": true, 00:28:01.738 "seek_data": true, 00:28:01.738 "copy": false, 00:28:01.738 "nvme_iov_md": false 00:28:01.738 }, 00:28:01.738 "driver_specific": { 00:28:01.738 "lvol": { 00:28:01.738 "lvol_store_uuid": "7a3c6024-6688-41a4-bf4c-85ab8e03f443", 00:28:01.738 "base_bdev": "aio_bdev", 00:28:01.738 
"thin_provision": false, 00:28:01.738 "num_allocated_clusters": 38, 00:28:01.738 "snapshot": false, 00:28:01.738 "clone": false, 00:28:01.738 "esnap_clone": false 00:28:01.738 } 00:28:01.738 } 00:28:01.738 } 00:28:01.738 ] 00:28:01.738 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:01.738 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:28:01.738 18:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:02.307 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:02.307 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 00:28:02.307 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:02.307 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:02.308 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab2f544e-afa2-48e6-b58e-19c0f8dbfb61 00:28:02.565 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a3c6024-6688-41a4-bf4c-85ab8e03f443 
00:28:03.131 18:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:03.132 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:03.389 00:28:03.389 real 0m17.762s 00:28:03.389 user 0m17.255s 00:28:03.390 sys 0m1.917s 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.390 ************************************ 00:28:03.390 END TEST lvs_grow_clean 00:28:03.390 ************************************ 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:03.390 ************************************ 00:28:03.390 START TEST lvs_grow_dirty 00:28:03.390 ************************************ 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:03.390 18:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:03.390 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:03.648 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:03.648 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:03.908 18:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:03.908 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:03.908 18:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:04.168 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:04.168 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:04.168 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 lvol 150 00:28:04.429 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:04.429 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:04.429 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:04.688 [2024-12-09 18:17:27.598876] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:04.688 [2024-12-09 
18:17:27.598960] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:04.688 true 00:28:04.688 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:04.688 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:04.946 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:04.946 18:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:05.204 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:05.461 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.721 [2024-12-09 18:17:28.691167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.721 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:05.979 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1607175 00:28:05.979 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:05.979 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.979 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1607175 /var/tmp/bdevperf.sock 00:28:05.979 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1607175 ']' 00:28:05.979 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.980 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.980 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:05.980 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.980 18:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:06.240 [2024-12-09 18:17:29.030635] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:28:06.240 [2024-12-09 18:17:29.030717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607175 ] 00:28:06.240 [2024-12-09 18:17:29.101038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.240 [2024-12-09 18:17:29.162126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.240 18:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.500 18:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:06.500 18:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:06.785 Nvme0n1 00:28:06.785 18:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:07.071 [ 00:28:07.071 { 00:28:07.071 "name": "Nvme0n1", 00:28:07.071 "aliases": [ 00:28:07.071 "7d789ca6-2f31-4cd2-bd48-2839d02ffb71" 00:28:07.071 ], 00:28:07.071 "product_name": "NVMe disk", 00:28:07.071 "block_size": 4096, 00:28:07.071 "num_blocks": 38912, 00:28:07.071 "uuid": "7d789ca6-2f31-4cd2-bd48-2839d02ffb71", 00:28:07.071 "numa_id": 0, 00:28:07.071 "assigned_rate_limits": { 00:28:07.071 "rw_ios_per_sec": 0, 00:28:07.071 "rw_mbytes_per_sec": 0, 00:28:07.071 "r_mbytes_per_sec": 0, 00:28:07.071 "w_mbytes_per_sec": 0 00:28:07.071 }, 00:28:07.071 "claimed": false, 00:28:07.071 "zoned": false, 
00:28:07.071 "supported_io_types": { 00:28:07.071 "read": true, 00:28:07.071 "write": true, 00:28:07.071 "unmap": true, 00:28:07.071 "flush": true, 00:28:07.071 "reset": true, 00:28:07.071 "nvme_admin": true, 00:28:07.071 "nvme_io": true, 00:28:07.071 "nvme_io_md": false, 00:28:07.071 "write_zeroes": true, 00:28:07.071 "zcopy": false, 00:28:07.071 "get_zone_info": false, 00:28:07.071 "zone_management": false, 00:28:07.071 "zone_append": false, 00:28:07.071 "compare": true, 00:28:07.071 "compare_and_write": true, 00:28:07.071 "abort": true, 00:28:07.071 "seek_hole": false, 00:28:07.071 "seek_data": false, 00:28:07.071 "copy": true, 00:28:07.071 "nvme_iov_md": false 00:28:07.071 }, 00:28:07.071 "memory_domains": [ 00:28:07.071 { 00:28:07.071 "dma_device_id": "system", 00:28:07.071 "dma_device_type": 1 00:28:07.071 } 00:28:07.071 ], 00:28:07.071 "driver_specific": { 00:28:07.071 "nvme": [ 00:28:07.071 { 00:28:07.071 "trid": { 00:28:07.071 "trtype": "TCP", 00:28:07.071 "adrfam": "IPv4", 00:28:07.071 "traddr": "10.0.0.2", 00:28:07.071 "trsvcid": "4420", 00:28:07.071 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:07.071 }, 00:28:07.071 "ctrlr_data": { 00:28:07.071 "cntlid": 1, 00:28:07.071 "vendor_id": "0x8086", 00:28:07.071 "model_number": "SPDK bdev Controller", 00:28:07.071 "serial_number": "SPDK0", 00:28:07.071 "firmware_revision": "25.01", 00:28:07.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.071 "oacs": { 00:28:07.071 "security": 0, 00:28:07.071 "format": 0, 00:28:07.071 "firmware": 0, 00:28:07.071 "ns_manage": 0 00:28:07.071 }, 00:28:07.071 "multi_ctrlr": true, 00:28:07.071 "ana_reporting": false 00:28:07.071 }, 00:28:07.071 "vs": { 00:28:07.071 "nvme_version": "1.3" 00:28:07.071 }, 00:28:07.071 "ns_data": { 00:28:07.071 "id": 1, 00:28:07.071 "can_share": true 00:28:07.071 } 00:28:07.071 } 00:28:07.071 ], 00:28:07.071 "mp_policy": "active_passive" 00:28:07.071 } 00:28:07.071 } 00:28:07.071 ] 00:28:07.071 18:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1607287 00:28:07.071 18:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:07.071 18:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:07.330 Running I/O for 10 seconds... 00:28:08.269 Latency(us) 00:28:08.269 [2024-12-09T17:17:31.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.269 Nvme0n1 : 1.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:28:08.269 [2024-12-09T17:17:31.310Z] =================================================================================================================== 00:28:08.269 [2024-12-09T17:17:31.310Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:28:08.269 00:28:09.206 18:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:09.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.206 Nvme0n1 : 2.00 15208.50 59.41 0.00 0.00 0.00 0.00 0.00 00:28:09.206 [2024-12-09T17:17:32.247Z] =================================================================================================================== 00:28:09.206 [2024-12-09T17:17:32.247Z] Total : 15208.50 59.41 0.00 0.00 0.00 0.00 0.00 00:28:09.206 00:28:09.464 true 00:28:09.464 18:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:09.464 18:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:09.724 18:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:09.724 18:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:09.724 18:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1607287 00:28:10.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.294 Nvme0n1 : 3.00 15293.67 59.74 0.00 0.00 0.00 0.00 0.00 00:28:10.294 [2024-12-09T17:17:33.335Z] =================================================================================================================== 00:28:10.294 [2024-12-09T17:17:33.335Z] Total : 15293.67 59.74 0.00 0.00 0.00 0.00 0.00 00:28:10.294 00:28:11.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.235 Nvme0n1 : 4.00 15352.25 59.97 0.00 0.00 0.00 0.00 0.00 00:28:11.235 [2024-12-09T17:17:34.276Z] =================================================================================================================== 00:28:11.235 [2024-12-09T17:17:34.276Z] Total : 15352.25 59.97 0.00 0.00 0.00 0.00 0.00 00:28:11.235 00:28:12.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:12.616 Nvme0n1 : 5.00 15418.80 60.23 0.00 0.00 0.00 0.00 0.00 00:28:12.616 [2024-12-09T17:17:35.657Z] =================================================================================================================== 00:28:12.616 [2024-12-09T17:17:35.657Z] Total : 15418.80 60.23 0.00 0.00 0.00 0.00 0.00 00:28:12.616 00:28:13.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:28:13.551 Nvme0n1 : 6.00 15489.83 60.51 0.00 0.00 0.00 0.00 0.00 00:28:13.551 [2024-12-09T17:17:36.592Z] =================================================================================================================== 00:28:13.551 [2024-12-09T17:17:36.592Z] Total : 15489.83 60.51 0.00 0.00 0.00 0.00 0.00 00:28:13.551 00:28:14.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:14.488 Nvme0n1 : 7.00 15563.00 60.79 0.00 0.00 0.00 0.00 0.00 00:28:14.488 [2024-12-09T17:17:37.529Z] =================================================================================================================== 00:28:14.488 [2024-12-09T17:17:37.529Z] Total : 15563.00 60.79 0.00 0.00 0.00 0.00 0.00 00:28:14.488 00:28:15.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:15.423 Nvme0n1 : 8.00 15602.00 60.95 0.00 0.00 0.00 0.00 0.00 00:28:15.423 [2024-12-09T17:17:38.464Z] =================================================================================================================== 00:28:15.423 [2024-12-09T17:17:38.464Z] Total : 15602.00 60.95 0.00 0.00 0.00 0.00 0.00 00:28:15.423 00:28:16.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:16.361 Nvme0n1 : 9.00 15646.44 61.12 0.00 0.00 0.00 0.00 0.00 00:28:16.361 [2024-12-09T17:17:39.402Z] =================================================================================================================== 00:28:16.361 [2024-12-09T17:17:39.402Z] Total : 15646.44 61.12 0.00 0.00 0.00 0.00 0.00 00:28:16.361 00:28:17.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.295 Nvme0n1 : 10.00 15669.30 61.21 0.00 0.00 0.00 0.00 0.00 00:28:17.295 [2024-12-09T17:17:40.336Z] =================================================================================================================== 00:28:17.295 [2024-12-09T17:17:40.337Z] Total : 15669.30 61.21 0.00 0.00 0.00 0.00 0.00 00:28:17.296 00:28:17.296 
00:28:17.296 Latency(us) 00:28:17.296 [2024-12-09T17:17:40.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.296 Nvme0n1 : 10.00 15675.93 61.23 0.00 0.00 8160.57 4247.70 18058.81 00:28:17.296 [2024-12-09T17:17:40.337Z] =================================================================================================================== 00:28:17.296 [2024-12-09T17:17:40.337Z] Total : 15675.93 61.23 0.00 0.00 8160.57 4247.70 18058.81 00:28:17.296 { 00:28:17.296 "results": [ 00:28:17.296 { 00:28:17.296 "job": "Nvme0n1", 00:28:17.296 "core_mask": "0x2", 00:28:17.296 "workload": "randwrite", 00:28:17.296 "status": "finished", 00:28:17.296 "queue_depth": 128, 00:28:17.296 "io_size": 4096, 00:28:17.296 "runtime": 10.003933, 00:28:17.296 "iops": 15675.934654900228, 00:28:17.296 "mibps": 61.234119745704014, 00:28:17.296 "io_failed": 0, 00:28:17.296 "io_timeout": 0, 00:28:17.296 "avg_latency_us": 8160.569499237985, 00:28:17.296 "min_latency_us": 4247.7037037037035, 00:28:17.296 "max_latency_us": 18058.80888888889 00:28:17.296 } 00:28:17.296 ], 00:28:17.296 "core_count": 1 00:28:17.296 } 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1607175 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1607175 ']' 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1607175 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.296 18:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1607175 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1607175' 00:28:17.296 killing process with pid 1607175 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1607175 00:28:17.296 Received shutdown signal, test time was about 10.000000 seconds 00:28:17.296 00:28:17.296 Latency(us) 00:28:17.296 [2024-12-09T17:17:40.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.296 [2024-12-09T17:17:40.337Z] =================================================================================================================== 00:28:17.296 [2024-12-09T17:17:40.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.296 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1607175 00:28:17.554 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.814 18:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.073 18:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:18.073 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1604667 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1604667 00:28:18.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1604667 Killed "${NVMF_APP[@]}" "$@" 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1608597 00:28:18.334 18:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1608597 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1608597 ']' 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.334 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:18.593 [2024-12-09 18:17:41.398718] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:18.593 [2024-12-09 18:17:41.399823] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:28:18.593 [2024-12-09 18:17:41.399902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.593 [2024-12-09 18:17:41.474574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.593 [2024-12-09 18:17:41.531962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.593 [2024-12-09 18:17:41.532023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.593 [2024-12-09 18:17:41.532037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.593 [2024-12-09 18:17:41.532057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.593 [2024-12-09 18:17:41.532083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.593 [2024-12-09 18:17:41.532668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.593 [2024-12-09 18:17:41.624258] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:18.593 [2024-12-09 18:17:41.624509] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.851 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:19.109 [2024-12-09 18:17:41.931407] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:19.109 [2024-12-09 18:17:41.931573] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:19.109 [2024-12-09 18:17:41.931624] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:19.109 18:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:19.367 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d789ca6-2f31-4cd2-bd48-2839d02ffb71 -t 2000 00:28:19.626 [ 00:28:19.626 { 00:28:19.626 "name": "7d789ca6-2f31-4cd2-bd48-2839d02ffb71", 00:28:19.626 "aliases": [ 00:28:19.626 "lvs/lvol" 00:28:19.626 ], 00:28:19.626 "product_name": "Logical Volume", 00:28:19.626 "block_size": 4096, 00:28:19.626 "num_blocks": 38912, 00:28:19.626 "uuid": "7d789ca6-2f31-4cd2-bd48-2839d02ffb71", 00:28:19.626 "assigned_rate_limits": { 00:28:19.626 "rw_ios_per_sec": 0, 00:28:19.626 "rw_mbytes_per_sec": 0, 00:28:19.626 "r_mbytes_per_sec": 0, 00:28:19.626 "w_mbytes_per_sec": 0 00:28:19.626 }, 00:28:19.626 "claimed": false, 00:28:19.626 "zoned": false, 00:28:19.626 "supported_io_types": { 00:28:19.626 "read": true, 00:28:19.626 "write": true, 00:28:19.626 "unmap": true, 00:28:19.626 "flush": false, 00:28:19.626 "reset": true, 00:28:19.627 "nvme_admin": false, 00:28:19.627 "nvme_io": false, 00:28:19.627 "nvme_io_md": false, 00:28:19.627 "write_zeroes": true, 
00:28:19.627 "zcopy": false, 00:28:19.627 "get_zone_info": false, 00:28:19.627 "zone_management": false, 00:28:19.627 "zone_append": false, 00:28:19.627 "compare": false, 00:28:19.627 "compare_and_write": false, 00:28:19.627 "abort": false, 00:28:19.627 "seek_hole": true, 00:28:19.627 "seek_data": true, 00:28:19.627 "copy": false, 00:28:19.627 "nvme_iov_md": false 00:28:19.627 }, 00:28:19.627 "driver_specific": { 00:28:19.627 "lvol": { 00:28:19.627 "lvol_store_uuid": "ecf158a1-22e0-467a-94dc-9f02e29dfd85", 00:28:19.627 "base_bdev": "aio_bdev", 00:28:19.627 "thin_provision": false, 00:28:19.627 "num_allocated_clusters": 38, 00:28:19.627 "snapshot": false, 00:28:19.627 "clone": false, 00:28:19.627 "esnap_clone": false 00:28:19.627 } 00:28:19.627 } 00:28:19.627 } 00:28:19.627 ] 00:28:19.627 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:19.627 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:19.627 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:19.886 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:19.886 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:19.887 18:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:20.147 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:20.147 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:20.407 [2024-12-09 18:17:43.305254] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:20.407 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:20.407 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:20.407 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:20.407 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.407 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.407 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.408 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.408 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.408 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.408 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.408 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:20.408 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:20.666 request: 00:28:20.666 { 00:28:20.666 "uuid": "ecf158a1-22e0-467a-94dc-9f02e29dfd85", 00:28:20.666 "method": "bdev_lvol_get_lvstores", 00:28:20.666 "req_id": 1 00:28:20.666 } 00:28:20.666 Got JSON-RPC error response 00:28:20.666 response: 00:28:20.666 { 00:28:20.666 "code": -19, 00:28:20.666 "message": "No such device" 00:28:20.666 } 00:28:20.666 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:20.666 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:20.666 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:20.666 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:20.666 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:20.925 aio_bdev 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:20.925 18:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:21.183 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d789ca6-2f31-4cd2-bd48-2839d02ffb71 -t 2000 00:28:21.441 [ 00:28:21.441 { 00:28:21.441 "name": "7d789ca6-2f31-4cd2-bd48-2839d02ffb71", 00:28:21.441 "aliases": [ 00:28:21.441 "lvs/lvol" 00:28:21.441 ], 00:28:21.441 "product_name": "Logical Volume", 00:28:21.441 "block_size": 4096, 00:28:21.441 "num_blocks": 38912, 00:28:21.441 "uuid": "7d789ca6-2f31-4cd2-bd48-2839d02ffb71", 00:28:21.441 "assigned_rate_limits": { 00:28:21.441 "rw_ios_per_sec": 0, 00:28:21.441 "rw_mbytes_per_sec": 0, 00:28:21.441 
"r_mbytes_per_sec": 0, 00:28:21.441 "w_mbytes_per_sec": 0 00:28:21.441 }, 00:28:21.441 "claimed": false, 00:28:21.441 "zoned": false, 00:28:21.441 "supported_io_types": { 00:28:21.441 "read": true, 00:28:21.441 "write": true, 00:28:21.441 "unmap": true, 00:28:21.441 "flush": false, 00:28:21.441 "reset": true, 00:28:21.441 "nvme_admin": false, 00:28:21.441 "nvme_io": false, 00:28:21.441 "nvme_io_md": false, 00:28:21.441 "write_zeroes": true, 00:28:21.441 "zcopy": false, 00:28:21.441 "get_zone_info": false, 00:28:21.441 "zone_management": false, 00:28:21.441 "zone_append": false, 00:28:21.441 "compare": false, 00:28:21.441 "compare_and_write": false, 00:28:21.441 "abort": false, 00:28:21.441 "seek_hole": true, 00:28:21.441 "seek_data": true, 00:28:21.441 "copy": false, 00:28:21.441 "nvme_iov_md": false 00:28:21.441 }, 00:28:21.441 "driver_specific": { 00:28:21.441 "lvol": { 00:28:21.441 "lvol_store_uuid": "ecf158a1-22e0-467a-94dc-9f02e29dfd85", 00:28:21.441 "base_bdev": "aio_bdev", 00:28:21.441 "thin_provision": false, 00:28:21.441 "num_allocated_clusters": 38, 00:28:21.441 "snapshot": false, 00:28:21.441 "clone": false, 00:28:21.441 "esnap_clone": false 00:28:21.441 } 00:28:21.441 } 00:28:21.441 } 00:28:21.441 ] 00:28:21.441 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:21.441 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:21.441 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:21.701 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:21.701 18:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:21.701 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:21.960 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:21.960 18:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d789ca6-2f31-4cd2-bd48-2839d02ffb71 00:28:22.529 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecf158a1-22e0-467a-94dc-9f02e29dfd85 00:28:22.529 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:22.789 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:23.049 00:28:23.049 real 0m19.615s 00:28:23.049 user 0m36.848s 00:28:23.049 sys 0m4.561s 00:28:23.049 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.049 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:23.049 ************************************ 00:28:23.049 END TEST lvs_grow_dirty 00:28:23.049 ************************************ 
00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:23.050 nvmf_trace.0 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.050 18:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.050 rmmod nvme_tcp 00:28:23.050 rmmod nvme_fabrics 00:28:23.050 rmmod nvme_keyring 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1608597 ']' 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1608597 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1608597 ']' 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1608597 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.050 18:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1608597 00:28:23.050 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.050 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.050 
18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1608597' 00:28:23.050 killing process with pid 1608597 00:28:23.050 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1608597 00:28:23.050 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1608597 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.309 18:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.848 
18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.848 00:28:25.848 real 0m42.839s 00:28:25.848 user 0m55.860s 00:28:25.848 sys 0m8.472s 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:25.848 ************************************ 00:28:25.848 END TEST nvmf_lvs_grow 00:28:25.848 ************************************ 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:25.848 ************************************ 00:28:25.848 START TEST nvmf_bdev_io_wait 00:28:25.848 ************************************ 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:25.848 * Looking for test storage... 
00:28:25.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:25.848 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:25.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.849 --rc genhtml_branch_coverage=1 00:28:25.849 --rc genhtml_function_coverage=1 00:28:25.849 --rc genhtml_legend=1 00:28:25.849 --rc geninfo_all_blocks=1 00:28:25.849 --rc geninfo_unexecuted_blocks=1 00:28:25.849 00:28:25.849 ' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:25.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.849 --rc genhtml_branch_coverage=1 00:28:25.849 --rc genhtml_function_coverage=1 00:28:25.849 --rc genhtml_legend=1 00:28:25.849 --rc geninfo_all_blocks=1 00:28:25.849 --rc geninfo_unexecuted_blocks=1 00:28:25.849 00:28:25.849 ' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:25.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.849 --rc genhtml_branch_coverage=1 00:28:25.849 --rc genhtml_function_coverage=1 00:28:25.849 --rc genhtml_legend=1 00:28:25.849 --rc geninfo_all_blocks=1 00:28:25.849 --rc geninfo_unexecuted_blocks=1 00:28:25.849 00:28:25.849 ' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:25.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.849 --rc genhtml_branch_coverage=1 00:28:25.849 --rc genhtml_function_coverage=1 
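[Annotation] The `cmp_versions 1.15 '<' 2` trace above splits both version strings on `.`, `-`, and `:` and compares them field by field. A minimal standalone sketch of that less-than check (not the SPDK `scripts/common.sh` helper itself; helper name `lt` mirrors the traced call):

```shell
# Sketch of the traced component-wise version comparison: split on
# '.', '-', ':' and compare numerically, first differing field decides.
lt() {
  local IFS=.-: v a b
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                             # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the log takes the `return 0` branch at `scripts/common.sh@368`: lcov 1.15 compares below 2 on the first field.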
00:28:25.849 --rc genhtml_legend=1 00:28:25.849 --rc geninfo_all_blocks=1 00:28:25.849 --rc geninfo_unexecuted_blocks=1 00:28:25.849 00:28:25.849 ' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.849 18:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.849 18:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.849 18:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.849 18:17:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.849 18:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:27.755 18:17:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.755 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:27.756 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:27.756 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:27.756 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:27.756 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.756 18:17:50 
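[Annotation] The two "Found net devices under ..." messages come from globbing each NIC's sysfs entry and stripping the path prefix. A self-contained sketch of that loop, using a temporary stand-in for `/sys` so it runs anywhere (the `SYSFS_ROOT` layout and interface names below are assumptions mirroring the log; the real script reads `/sys/bus/pci/devices` directly):

```shell
# Stand-in sysfs tree: two NIC PCI functions, one net interface each.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/0000:0a:00.0/net/cvl_0_0" \
         "$SYSFS_ROOT/0000:0a:00.1/net/cvl_0_1"

for pci in 0000:0a:00.0 0000:0a:00.1; do
  # Glob the interfaces the kernel exposes under this PCI function...
  pci_net_devs=("$SYSFS_ROOT/$pci/net/"*)
  # ...then keep only the interface names, as the traced @427 step does.
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```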
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:27.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:28:27.756 00:28:27.756 --- 10.0.0.2 ping statistics --- 00:28:27.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.756 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:27.756 00:28:27.756 --- 10.0.0.1 ping statistics --- 00:28:27.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.756 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.756 18:17:50 
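[Annotation] The `nvmf_tcp_init` sequence above builds a two-endpoint TCP test topology from one physical NIC pair: the target-side interface moves into a private network namespace, each side gets a 10.0.0.x/24 address, links come up, and an iptables rule admits port 4420. The commands need root, so this sketch only prints the sequence (names mirror the log):

```shell
netns=cvl_0_0_ns_spdk
setup_cmds() {
  cat <<EOF
ip netns add $netns
ip link set cvl_0_0 netns $netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $netns ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $netns ip link set cvl_0_0 up
ip netns exec $netns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
EOF
}
setup_cmds
```

The two pings in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this topology in both directions before any NVMe-oF traffic is attempted.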
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1611195 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1611195 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1611195 ']' 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.756 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
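[Annotation] Because `nvmf_tgt` is launched with `--wait-for-rpc`, `waitforlisten` must block until the target's RPC socket appears before any RPC can be issued. A minimal retry sketch of that wait (socket path, poll interval, and retry count are assumptions, not the SPDK helper's exact logic):

```shell
# Poll for the RPC UNIX-domain socket; succeed as soon as it exists.
waitforlisten() {
  local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$sock" ] && return 0   # socket exists: target is listening
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
```

Usage in the log's terms: `waitforlisten` is called with the pid (1611195) and implicitly `/var/tmp/spdk.sock`, printing "Waiting for process to start up and listen on UNIX domain socket ..." until the target is reachable.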
00:28:27.757 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.757 18:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.016 [2024-12-09 18:17:50.834178] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:28.016 [2024-12-09 18:17:50.835231] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:28:28.016 [2024-12-09 18:17:50.835287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.016 [2024-12-09 18:17:50.908917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.017 [2024-12-09 18:17:50.967413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.017 [2024-12-09 18:17:50.967465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.017 [2024-12-09 18:17:50.967488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.017 [2024-12-09 18:17:50.967498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.017 [2024-12-09 18:17:50.967507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:28.017 [2024-12-09 18:17:50.969103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.017 [2024-12-09 18:17:50.969211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.017 [2024-12-09 18:17:50.969305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.017 [2024-12-09 18:17:50.969313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.017 [2024-12-09 18:17:50.969953] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.275 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 [2024-12-09 18:17:51.149223] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:28.276 [2024-12-09 18:17:51.149453] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:28.276 [2024-12-09 18:17:51.150335] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:28.276 [2024-12-09 18:17:51.151146] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 [2024-12-09 18:17:51.158133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 Malloc0 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 [2024-12-09 18:17:51.214334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1611273 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1611274 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:28.276 18:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1611277 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.276 { 00:28:28.276 "params": { 00:28:28.276 "name": "Nvme$subsystem", 00:28:28.276 "trtype": "$TEST_TRANSPORT", 00:28:28.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.276 "adrfam": "ipv4", 00:28:28.276 "trsvcid": "$NVMF_PORT", 00:28:28.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.276 "hdgst": ${hdgst:-false}, 00:28:28.276 "ddgst": ${ddgst:-false} 00:28:28.276 }, 00:28:28.276 "method": "bdev_nvme_attach_controller" 00:28:28.276 } 00:28:28.276 EOF 00:28:28.276 )") 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.276 18:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1611279 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.276 { 00:28:28.276 "params": { 00:28:28.276 "name": "Nvme$subsystem", 00:28:28.276 "trtype": "$TEST_TRANSPORT", 00:28:28.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.276 "adrfam": "ipv4", 00:28:28.276 "trsvcid": "$NVMF_PORT", 00:28:28.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.276 "hdgst": ${hdgst:-false}, 00:28:28.276 "ddgst": ${ddgst:-false} 00:28:28.276 }, 00:28:28.276 "method": "bdev_nvme_attach_controller" 00:28:28.276 } 00:28:28.276 EOF 00:28:28.276 )") 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:28.276 18:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.276 { 00:28:28.276 "params": { 00:28:28.276 "name": "Nvme$subsystem", 00:28:28.276 "trtype": "$TEST_TRANSPORT", 00:28:28.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.276 "adrfam": "ipv4", 00:28:28.276 "trsvcid": "$NVMF_PORT", 00:28:28.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.276 "hdgst": ${hdgst:-false}, 00:28:28.276 "ddgst": ${ddgst:-false} 00:28:28.276 }, 00:28:28.276 "method": "bdev_nvme_attach_controller" 00:28:28.276 } 00:28:28.276 EOF 00:28:28.276 )") 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.276 { 00:28:28.276 "params": { 00:28:28.276 "name": "Nvme$subsystem", 00:28:28.276 "trtype": "$TEST_TRANSPORT", 00:28:28.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.276 "adrfam": "ipv4", 00:28:28.276 "trsvcid": "$NVMF_PORT", 00:28:28.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.276 "hdgst": ${hdgst:-false}, 00:28:28.276 "ddgst": ${ddgst:-false} 00:28:28.276 }, 00:28:28.276 "method": "bdev_nvme_attach_controller" 00:28:28.276 } 00:28:28.276 EOF 00:28:28.276 )") 00:28:28.276 
18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1611273 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:28.276 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.277 "params": { 00:28:28.277 "name": "Nvme1", 00:28:28.277 "trtype": "tcp", 00:28:28.277 "traddr": "10.0.0.2", 00:28:28.277 "adrfam": "ipv4", 00:28:28.277 "trsvcid": "4420", 00:28:28.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.277 "hdgst": false, 00:28:28.277 "ddgst": false 00:28:28.277 }, 00:28:28.277 "method": "bdev_nvme_attach_controller" 00:28:28.277 }' 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.277 "params": { 00:28:28.277 "name": "Nvme1", 00:28:28.277 "trtype": "tcp", 00:28:28.277 "traddr": "10.0.0.2", 00:28:28.277 "adrfam": "ipv4", 00:28:28.277 "trsvcid": "4420", 
00:28:28.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.277 "hdgst": false, 00:28:28.277 "ddgst": false 00:28:28.277 }, 00:28:28.277 "method": "bdev_nvme_attach_controller" 00:28:28.277 }' 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.277 "params": { 00:28:28.277 "name": "Nvme1", 00:28:28.277 "trtype": "tcp", 00:28:28.277 "traddr": "10.0.0.2", 00:28:28.277 "adrfam": "ipv4", 00:28:28.277 "trsvcid": "4420", 00:28:28.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.277 "hdgst": false, 00:28:28.277 "ddgst": false 00:28:28.277 }, 00:28:28.277 "method": "bdev_nvme_attach_controller" 00:28:28.277 }' 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:28.277 18:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.277 "params": { 00:28:28.277 "name": "Nvme1", 00:28:28.277 "trtype": "tcp", 00:28:28.277 "traddr": "10.0.0.2", 00:28:28.277 "adrfam": "ipv4", 00:28:28.277 "trsvcid": "4420", 00:28:28.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.277 "hdgst": false, 00:28:28.277 "ddgst": false 00:28:28.277 }, 00:28:28.277 "method": "bdev_nvme_attach_controller" 00:28:28.277 }' 00:28:28.277 [2024-12-09 18:17:51.266655] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:28:28.277 [2024-12-09 18:17:51.266728] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:28:28.277 [2024-12-09 18:17:51.267677] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:28:28.277 [2024-12-09 18:17:51.267676] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:28:28.277 [2024-12-09 18:17:51.267678] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:28:28.277 [2024-12-09 18:17:51.267756] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:28:28.277 [2024-12-09 18:17:51.267757] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:28.277 [2024-12-09 18:17:51.267758] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:28:28.535 [2024-12-09 18:17:51.452048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.535 [2024-12-09 18:17:51.505096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:28.535 [2024-12-09 18:17:51.549598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.794 [2024-12-09 18:17:51.603667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:28.794 [2024-12-09 18:17:51.676267] app.c: 919:spdk_app_start: 
*NOTICE*: Total cores available: 1 00:28:28.794 [2024-12-09 18:17:51.728581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.794 [2024-12-09 18:17:51.736414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:28.794 [2024-12-09 18:17:51.779527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:29.052 Running I/O for 1 seconds... 00:28:29.052 Running I/O for 1 seconds... 00:28:29.052 Running I/O for 1 seconds... 00:28:29.052 Running I/O for 1 seconds... 00:28:29.986 189528.00 IOPS, 740.34 MiB/s 00:28:29.986 Latency(us) 00:28:29.986 [2024-12-09T17:17:53.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.986 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:29.986 Nvme1n1 : 1.00 189176.38 738.97 0.00 0.00 672.81 282.17 1844.72 00:28:29.986 [2024-12-09T17:17:53.027Z] =================================================================================================================== 00:28:29.986 [2024-12-09T17:17:53.027Z] Total : 189176.38 738.97 0.00 0.00 672.81 282.17 1844.72 00:28:29.986 7088.00 IOPS, 27.69 MiB/s 00:28:29.986 Latency(us) 00:28:29.986 [2024-12-09T17:17:53.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.986 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:29.986 Nvme1n1 : 1.02 7116.13 27.80 0.00 0.00 17911.10 4369.07 32622.36 00:28:29.986 [2024-12-09T17:17:53.027Z] =================================================================================================================== 00:28:29.986 [2024-12-09T17:17:53.027Z] Total : 7116.13 27.80 0.00 0.00 17911.10 4369.07 32622.36 00:28:29.986 9458.00 IOPS, 36.95 MiB/s 00:28:29.986 Latency(us) 00:28:29.986 [2024-12-09T17:17:53.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.986 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:29.986 Nvme1n1 : 1.01 
9499.06 37.11 0.00 0.00 13409.26 4951.61 18252.99 00:28:29.986 [2024-12-09T17:17:53.027Z] =================================================================================================================== 00:28:29.986 [2024-12-09T17:17:53.027Z] Total : 9499.06 37.11 0.00 0.00 13409.26 4951.61 18252.99 00:28:30.245 6646.00 IOPS, 25.96 MiB/s 00:28:30.245 Latency(us) 00:28:30.245 [2024-12-09T17:17:53.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.245 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:30.245 Nvme1n1 : 1.01 6758.47 26.40 0.00 0.00 18878.92 4708.88 35923.44 00:28:30.245 [2024-12-09T17:17:53.286Z] =================================================================================================================== 00:28:30.245 [2024-12-09T17:17:53.286Z] Total : 6758.47 26.40 0.00 0.00 18878.92 4708.88 35923.44 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1611274 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1611277 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1611279 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:30.245 
18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.245 rmmod nvme_tcp 00:28:30.245 rmmod nvme_fabrics 00:28:30.245 rmmod nvme_keyring 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1611195 ']' 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1611195 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1611195 ']' 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1611195 00:28:30.245 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:30.504 18:17:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611195 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611195' 00:28:30.504 killing process with pid 1611195 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1611195 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1611195 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.504 18:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.040 00:28:33.040 real 0m7.257s 00:28:33.040 user 0m14.511s 00:28:33.040 sys 0m4.004s 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:33.040 ************************************ 00:28:33.040 END TEST nvmf_bdev_io_wait 00:28:33.040 ************************************ 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:33.040 ************************************ 00:28:33.040 START TEST nvmf_queue_depth 00:28:33.040 ************************************ 00:28:33.040 18:17:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:33.040 * Looking for test storage... 00:28:33.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # 
ver1_l=2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 
00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.040 --rc genhtml_branch_coverage=1 00:28:33.040 --rc genhtml_function_coverage=1 00:28:33.040 --rc genhtml_legend=1 00:28:33.040 --rc geninfo_all_blocks=1 00:28:33.040 --rc geninfo_unexecuted_blocks=1 00:28:33.040 00:28:33.040 ' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.040 --rc genhtml_branch_coverage=1 00:28:33.040 --rc genhtml_function_coverage=1 00:28:33.040 --rc genhtml_legend=1 00:28:33.040 --rc geninfo_all_blocks=1 00:28:33.040 --rc geninfo_unexecuted_blocks=1 00:28:33.040 00:28:33.040 ' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.040 --rc genhtml_branch_coverage=1 00:28:33.040 --rc genhtml_function_coverage=1 00:28:33.040 --rc genhtml_legend=1 00:28:33.040 --rc geninfo_all_blocks=1 00:28:33.040 --rc geninfo_unexecuted_blocks=1 00:28:33.040 00:28:33.040 ' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.040 --rc genhtml_branch_coverage=1 00:28:33.040 --rc genhtml_function_coverage=1 00:28:33.040 --rc genhtml_legend=1 00:28:33.040 --rc geninfo_all_blocks=1 00:28:33.040 --rc geninfo_unexecuted_blocks=1 00:28:33.040 00:28:33.040 ' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.040 18:17:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.040 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.041 18:17:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.041 18:17:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.041 18:17:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.041 18:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.947 
18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:34.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.947 18:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:34.947 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:34.947 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:34.947 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.947 18:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:28:34.947 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.948 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.948 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.948 18:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:35.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:35.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:28:35.206 00:28:35.206 --- 10.0.0.2 ping statistics --- 00:28:35.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.206 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:35.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:28:35.206 00:28:35.206 --- 10.0.0.1 ping statistics --- 00:28:35.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.206 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.206 18:17:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1613501 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1613501 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1613501 ']' 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.206 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.206 [2024-12-09 18:17:58.149991] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:35.206 [2024-12-09 18:17:58.151058] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:28:35.206 [2024-12-09 18:17:58.151128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.206 [2024-12-09 18:17:58.229585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.465 [2024-12-09 18:17:58.283857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.465 [2024-12-09 18:17:58.283916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.465 [2024-12-09 18:17:58.283939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.465 [2024-12-09 18:17:58.283950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.465 [2024-12-09 18:17:58.283959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.465 [2024-12-09 18:17:58.284554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.465 [2024-12-09 18:17:58.369753] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:35.465 [2024-12-09 18:17:58.370043] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.465 [2024-12-09 18:17:58.421141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.465 Malloc0 00:28:35.465 18:17:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.465 [2024-12-09 18:17:58.489223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.465 
18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1613520 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1613520 /var/tmp/bdevperf.sock 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1613520 ']' 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:35.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.465 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.724 [2024-12-09 18:17:58.535384] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:28:35.724 [2024-12-09 18:17:58.535448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613520 ] 00:28:35.724 [2024-12-09 18:17:58.600256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.724 [2024-12-09 18:17:58.657009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.983 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.983 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:35.983 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:35.983 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.983 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:35.983 NVMe0n1 00:28:35.983 18:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.983 18:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:36.242 Running I/O for 10 seconds... 
00:28:38.115 8192.00 IOPS, 32.00 MiB/s [2024-12-09T17:18:02.531Z] 8366.00 IOPS, 32.68 MiB/s [2024-12-09T17:18:03.151Z] 8533.33 IOPS, 33.33 MiB/s [2024-12-09T17:18:04.162Z] 8494.75 IOPS, 33.18 MiB/s [2024-12-09T17:18:05.545Z] 8594.60 IOPS, 33.57 MiB/s [2024-12-09T17:18:06.486Z] 8535.50 IOPS, 33.34 MiB/s [2024-12-09T17:18:07.426Z] 8573.43 IOPS, 33.49 MiB/s [2024-12-09T17:18:08.363Z] 8577.38 IOPS, 33.51 MiB/s [2024-12-09T17:18:09.299Z] 8599.89 IOPS, 33.59 MiB/s [2024-12-09T17:18:09.299Z] 8603.30 IOPS, 33.61 MiB/s 00:28:46.258 Latency(us) 00:28:46.258 [2024-12-09T17:18:09.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.258 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:46.258 Verification LBA range: start 0x0 length 0x4000 00:28:46.258 NVMe0n1 : 10.07 8646.68 33.78 0.00 0.00 117974.45 20971.52 71070.15 00:28:46.258 [2024-12-09T17:18:09.299Z] =================================================================================================================== 00:28:46.258 [2024-12-09T17:18:09.299Z] Total : 8646.68 33.78 0.00 0.00 117974.45 20971.52 71070.15 00:28:46.258 { 00:28:46.258 "results": [ 00:28:46.258 { 00:28:46.258 "job": "NVMe0n1", 00:28:46.258 "core_mask": "0x1", 00:28:46.258 "workload": "verify", 00:28:46.258 "status": "finished", 00:28:46.258 "verify_range": { 00:28:46.258 "start": 0, 00:28:46.258 "length": 16384 00:28:46.258 }, 00:28:46.258 "queue_depth": 1024, 00:28:46.258 "io_size": 4096, 00:28:46.258 "runtime": 10.068262, 00:28:46.258 "iops": 8646.676059880047, 00:28:46.258 "mibps": 33.77607835890643, 00:28:46.258 "io_failed": 0, 00:28:46.258 "io_timeout": 0, 00:28:46.258 "avg_latency_us": 117974.44970570579, 00:28:46.258 "min_latency_us": 20971.52, 00:28:46.258 "max_latency_us": 71070.15111111112 00:28:46.258 } 00:28:46.258 ], 00:28:46.258 "core_count": 1 00:28:46.258 } 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1613520 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1613520 ']' 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1613520 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613520 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613520' 00:28:46.258 killing process with pid 1613520 00:28:46.258 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1613520 00:28:46.258 Received shutdown signal, test time was about 10.000000 seconds 00:28:46.258 00:28:46.258 Latency(us) 00:28:46.258 [2024-12-09T17:18:09.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.259 [2024-12-09T17:18:09.300Z] =================================================================================================================== 00:28:46.259 [2024-12-09T17:18:09.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.259 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1613520 00:28:46.517 18:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.517 rmmod nvme_tcp 00:28:46.517 rmmod nvme_fabrics 00:28:46.517 rmmod nvme_keyring 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1613501 ']' 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1613501 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1613501 ']' 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1613501 00:28:46.517 18:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613501 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613501' 00:28:46.517 killing process with pid 1613501 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1613501 00:28:46.517 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1613501 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.776 18:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.317 00:28:49.317 real 0m16.194s 00:28:49.317 user 0m22.194s 00:28:49.317 sys 0m3.395s 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:49.317 ************************************ 00:28:49.317 END TEST nvmf_queue_depth 00:28:49.317 ************************************ 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:49.317 ************************************ 00:28:49.317 START 
TEST nvmf_target_multipath 00:28:49.317 ************************************ 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:49.317 * Looking for test storage... 00:28:49.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:28:49.317 18:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.317 18:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:49.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.317 --rc genhtml_branch_coverage=1 00:28:49.317 --rc genhtml_function_coverage=1 00:28:49.317 --rc genhtml_legend=1 00:28:49.317 --rc geninfo_all_blocks=1 00:28:49.317 --rc geninfo_unexecuted_blocks=1 00:28:49.317 00:28:49.317 ' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:49.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.317 --rc genhtml_branch_coverage=1 00:28:49.317 --rc genhtml_function_coverage=1 00:28:49.317 --rc genhtml_legend=1 00:28:49.317 --rc geninfo_all_blocks=1 00:28:49.317 --rc geninfo_unexecuted_blocks=1 00:28:49.317 00:28:49.317 ' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:49.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.317 --rc genhtml_branch_coverage=1 00:28:49.317 --rc genhtml_function_coverage=1 00:28:49.317 --rc genhtml_legend=1 00:28:49.317 --rc geninfo_all_blocks=1 00:28:49.317 --rc geninfo_unexecuted_blocks=1 00:28:49.317 00:28:49.317 ' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:49.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.317 --rc genhtml_branch_coverage=1 00:28:49.317 --rc genhtml_function_coverage=1 00:28:49.317 --rc genhtml_legend=1 00:28:49.317 --rc geninfo_all_blocks=1 00:28:49.317 --rc geninfo_unexecuted_blocks=1 00:28:49.317 00:28:49.317 ' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.317 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.318 18:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.318 18:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.318 18:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.221 18:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.221 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:51.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.222 18:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.222 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.222 18:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.222 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.483 18:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:28:51.483 00:28:51.483 --- 10.0.0.2 ping statistics --- 00:28:51.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.483 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:28:51.483 00:28:51.483 --- 10.0.0.1 ping statistics --- 00:28:51.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.483 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:51.483 only one NIC for nvmf test 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:51.483 18:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.483 rmmod nvme_tcp 00:28:51.483 rmmod nvme_fabrics 00:28:51.483 rmmod nvme_keyring 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:51.483 18:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.483 18:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.393 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.393 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:53.393 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:53.393 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.394 
18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.394 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.652 00:28:53.652 real 0m4.557s 00:28:53.652 user 0m0.929s 00:28:53.652 sys 0m1.636s 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:53.652 ************************************ 00:28:53.652 END TEST nvmf_target_multipath 00:28:53.652 ************************************ 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:53.652 ************************************ 00:28:53.652 START TEST nvmf_zcopy 00:28:53.652 ************************************ 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:53.652 * Looking for test storage... 
00:28:53.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.652 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:53.653 18:18:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.653 --rc genhtml_branch_coverage=1 00:28:53.653 --rc genhtml_function_coverage=1 00:28:53.653 --rc genhtml_legend=1 00:28:53.653 --rc geninfo_all_blocks=1 00:28:53.653 --rc geninfo_unexecuted_blocks=1 00:28:53.653 00:28:53.653 ' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.653 --rc genhtml_branch_coverage=1 00:28:53.653 --rc genhtml_function_coverage=1 00:28:53.653 --rc genhtml_legend=1 00:28:53.653 --rc geninfo_all_blocks=1 00:28:53.653 --rc geninfo_unexecuted_blocks=1 00:28:53.653 00:28:53.653 ' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.653 --rc genhtml_branch_coverage=1 00:28:53.653 --rc genhtml_function_coverage=1 00:28:53.653 --rc genhtml_legend=1 00:28:53.653 --rc geninfo_all_blocks=1 00:28:53.653 --rc geninfo_unexecuted_blocks=1 00:28:53.653 00:28:53.653 ' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.653 --rc genhtml_branch_coverage=1 00:28:53.653 --rc genhtml_function_coverage=1 00:28:53.653 --rc genhtml_legend=1 00:28:53.653 --rc geninfo_all_blocks=1 00:28:53.653 --rc geninfo_unexecuted_blocks=1 00:28:53.653 00:28:53.653 ' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.653 18:18:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.653 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.653 18:18:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.654 18:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.186 
18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.186 18:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:56.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:56.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.186 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:56.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:56.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.187 18:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.187 18:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:28:56.187 00:28:56.187 --- 10.0.0.2 ping statistics --- 00:28:56.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.187 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:56.187 00:28:56.187 --- 10.0.0.1 ping statistics --- 00:28:56.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.187 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1619331 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1619331 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1619331 ']' 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.187 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.187 [2024-12-09 18:18:19.109555] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:56.187 [2024-12-09 18:18:19.110623] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:28:56.187 [2024-12-09 18:18:19.110676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.187 [2024-12-09 18:18:19.180702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.445 [2024-12-09 18:18:19.235317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.445 [2024-12-09 18:18:19.235367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.445 [2024-12-09 18:18:19.235386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.445 [2024-12-09 18:18:19.235402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.445 [2024-12-09 18:18:19.235416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.445 [2024-12-09 18:18:19.235976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.446 [2024-12-09 18:18:19.321083] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:56.446 [2024-12-09 18:18:19.321378] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 [2024-12-09 18:18:19.372577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 
18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 [2024-12-09 18:18:19.388768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 malloc0 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:56.446 { 00:28:56.446 "params": { 00:28:56.446 "name": "Nvme$subsystem", 00:28:56.446 "trtype": "$TEST_TRANSPORT", 00:28:56.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.446 "adrfam": "ipv4", 00:28:56.446 "trsvcid": "$NVMF_PORT", 00:28:56.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.446 "hdgst": ${hdgst:-false}, 00:28:56.446 "ddgst": ${ddgst:-false} 00:28:56.446 }, 00:28:56.446 "method": "bdev_nvme_attach_controller" 00:28:56.446 } 00:28:56.446 EOF 00:28:56.446 )") 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:56.446 18:18:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:56.446 18:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:56.446 "params": { 00:28:56.446 "name": "Nvme1", 00:28:56.446 "trtype": "tcp", 00:28:56.446 "traddr": "10.0.0.2", 00:28:56.446 "adrfam": "ipv4", 00:28:56.446 "trsvcid": "4420", 00:28:56.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:56.446 "hdgst": false, 00:28:56.446 "ddgst": false 00:28:56.446 }, 00:28:56.446 "method": "bdev_nvme_attach_controller" 00:28:56.446 }' 00:28:56.446 [2024-12-09 18:18:19.470656] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:28:56.446 [2024-12-09 18:18:19.470740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619358 ] 00:28:56.704 [2024-12-09 18:18:19.541008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.704 [2024-12-09 18:18:19.599449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.962 Running I/O for 10 seconds... 
00:28:58.830 5698.00 IOPS, 44.52 MiB/s [2024-12-09T17:18:23.245Z] 5734.50 IOPS, 44.80 MiB/s [2024-12-09T17:18:24.179Z] 5762.00 IOPS, 45.02 MiB/s [2024-12-09T17:18:25.113Z] 5766.25 IOPS, 45.05 MiB/s [2024-12-09T17:18:26.045Z] 5776.60 IOPS, 45.13 MiB/s [2024-12-09T17:18:26.984Z] 5770.50 IOPS, 45.08 MiB/s [2024-12-09T17:18:27.923Z] 5774.57 IOPS, 45.11 MiB/s [2024-12-09T17:18:28.861Z] 5779.88 IOPS, 45.16 MiB/s [2024-12-09T17:18:30.236Z] 5781.22 IOPS, 45.17 MiB/s [2024-12-09T17:18:30.236Z] 5777.70 IOPS, 45.14 MiB/s 00:29:07.195 Latency(us) 00:29:07.195 [2024-12-09T17:18:30.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.195 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:07.195 Verification LBA range: start 0x0 length 0x1000 00:29:07.195 Nvme1n1 : 10.05 5758.76 44.99 0.00 0.00 22078.94 4077.80 45049.93 00:29:07.195 [2024-12-09T17:18:30.236Z] =================================================================================================================== 00:29:07.195 [2024-12-09T17:18:30.236Z] Total : 5758.76 44.99 0.00 0.00 22078.94 4077.80 45049.93 00:29:07.195 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1620646 00:29:07.195 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:07.195 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:07.195 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:07.195 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:07.195 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:07.195 18:18:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:07.196 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:07.196 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:07.196 { 00:29:07.196 "params": { 00:29:07.196 "name": "Nvme$subsystem", 00:29:07.196 "trtype": "$TEST_TRANSPORT", 00:29:07.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.196 "adrfam": "ipv4", 00:29:07.196 "trsvcid": "$NVMF_PORT", 00:29:07.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.196 "hdgst": ${hdgst:-false}, 00:29:07.196 "ddgst": ${ddgst:-false} 00:29:07.196 }, 00:29:07.196 "method": "bdev_nvme_attach_controller" 00:29:07.196 } 00:29:07.196 EOF 00:29:07.196 )") 00:29:07.196 [2024-12-09 18:18:30.136480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.136524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:07.196 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:29:07.196 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:07.196 18:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:07.196 "params": { 00:29:07.196 "name": "Nvme1", 00:29:07.196 "trtype": "tcp", 00:29:07.196 "traddr": "10.0.0.2", 00:29:07.196 "adrfam": "ipv4", 00:29:07.196 "trsvcid": "4420", 00:29:07.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:07.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:07.196 "hdgst": false, 00:29:07.196 "ddgst": false 00:29:07.196 }, 00:29:07.196 "method": "bdev_nvme_attach_controller" 00:29:07.196 }' 00:29:07.196 [2024-12-09 18:18:30.144406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.144429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.152421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.152444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.160402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.160424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.168419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.168441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.176429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.176453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.178295] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:29:07.196 [2024-12-09 18:18:30.178351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620646 ] 00:29:07.196 [2024-12-09 18:18:30.184438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.184462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.192418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.192441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.200405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.200426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.208402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.208423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.216403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.216423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.224417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.224438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.196 [2024-12-09 18:18:30.232403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.196 [2024-12-09 18:18:30.232424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:29:07.472 [2024-12-09 18:18:30.240401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.240422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.248416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.248437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.250407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.473 [2024-12-09 18:18:30.256421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.256447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.264448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.264486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.272406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.272428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.280419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.280440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.288402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.288423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.296402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.296423] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.304404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.304425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.312095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.473 [2024-12-09 18:18:30.312405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.312426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.320401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.320428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.328448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.328480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.336454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.336491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.344439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.344477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.352441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.352479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.360461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:29:07.473 [2024-12-09 18:18:30.360500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.368461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.368498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.376407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.376429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.384436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.384469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.392441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.392478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.400443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.400480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.408415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.408443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.416402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.416423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.424424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 
18:18:30.424448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.432433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.432460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.440406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.440429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.448417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.448442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.456408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.456431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.464418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.464441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.472420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.472456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.480416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.480437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.488403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.488424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:29:07.473 [2024-12-09 18:18:30.496427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.473 [2024-12-09 18:18:30.496466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.504407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.504431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.512418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.512440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.520419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.520441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.528417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.528438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.536416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.536437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.544401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.544436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.552405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.552427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 
[2024-12-09 18:18:30.560402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.560423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.568402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.568423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.576401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.576421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.584401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.584421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.592405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.592426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.600403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.600425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.608402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.608422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.616403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.616425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.624401] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.624427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.632401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.632421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.640402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.640424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.648407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.648431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.656404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.656427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 Running I/O for 5 seconds... 
00:29:07.788 [2024-12-09 18:18:30.673643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.673671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.690489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.690518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.706404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.706447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.722916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.722956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.738296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.738324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.748130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.748157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.759995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.760022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.771198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.771238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.787818] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.787845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:07.788 [2024-12-09 18:18:30.797766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:07.788 [2024-12-09 18:18:30.797794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.813421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.813448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.823465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.823491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.838775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.838802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.854071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.854111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.864112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.864139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.875925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.875949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.886594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.886620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.900986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.901015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.911321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.911349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.926288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.926314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.944347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.944372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.954479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.954506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.969322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.969348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.979701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:30.979728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:30.991705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 
[2024-12-09 18:18:30.991732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:31.005525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:31.005576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:31.014709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:31.014750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:31.029955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:31.029981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:31.045731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:31.045758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:31.055561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:31.055588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.064 [2024-12-09 18:18:31.069667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.064 [2024-12-09 18:18:31.069694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.065 [2024-12-09 18:18:31.079632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.065 [2024-12-09 18:18:31.079659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.065 [2024-12-09 18:18:31.094907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.065 [2024-12-09 18:18:31.094932] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.108821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.108849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.118640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.118667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.133138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.133162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.143021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.143048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.157021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.157048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.166959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.166985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.180346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.180389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.189992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.190020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:08.323 [2024-12-09 18:18:31.201787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.201815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.217593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.217621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.227284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.227310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.242146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.242186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.252139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.252165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.264224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.264250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.274916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.274956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.290593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.290620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.308491] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.308531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.318518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.318569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.330072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.330112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.348222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.348248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.323 [2024-12-09 18:18:31.358219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.323 [2024-12-09 18:18:31.358245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.373283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.373310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.383152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.383179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.397874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.397914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.413396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.413422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.423526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.423577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.438144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.438170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.456316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.456344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.466724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.466750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.483010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.483035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.497818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.497855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.507539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.507578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.523061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 
[2024-12-09 18:18:31.523087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.538841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.538882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.554860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.554887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.569960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.569988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.588308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.588351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.598138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.598180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.581 [2024-12-09 18:18:31.609967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.581 [2024-12-09 18:18:31.609992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.626334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.626360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.643855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.643881] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.653601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.653629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 11482.00 IOPS, 89.70 MiB/s [2024-12-09T17:18:31.882Z] [2024-12-09 18:18:31.665406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.665432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.679947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.679991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.689372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.689397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.701446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.701472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.717264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.717290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.727395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.727421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.743512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.743562] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.758109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.758137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.776571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.776612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.786284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.786309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.798517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.798541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.814617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.814643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.830704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.830731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.848465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.848491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.841 [2024-12-09 18:18:31.857877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.857928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:08.841 [2024-12-09 18:18:31.869491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:08.841 [2024-12-09 18:18:31.869518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.880330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.880356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.891495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.891520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.902432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.902458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.918474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.918499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.936703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.936729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.946985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.947012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.961739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.961767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.978206] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.978231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:31.996353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:31.996380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.006945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.006970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.020700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.020744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.030581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.030609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.044600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.044628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.054099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.054126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.065760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.065788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.076491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.076517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.087521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.087557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.100966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.101001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.110309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.110336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.122035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.122062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.100 [2024-12-09 18:18:32.138438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.100 [2024-12-09 18:18:32.138465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.148268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.148295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.159705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.159732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.170407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 
[2024-12-09 18:18:32.170432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.186606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.186650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.202789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.202831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.218770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.218800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.234959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.235005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.251054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.251081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.266529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.266578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.283147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.283172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.297606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.297634] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.307614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.307640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.322540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.322576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.336155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.336183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.345813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.345841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.361938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.361973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.371953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.371979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.383633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.383660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.359 [2024-12-09 18:18:32.394952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.359 [2024-12-09 18:18:32.394977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:09.617 [2024-12-09 18:18:32.410961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.410987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.426937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.426963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.442720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.442750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.458838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.458880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.473951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.473980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.483268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.483294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.497196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.497221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.506994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.507020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.522410] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.522436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.538337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.538377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.548469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.548496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.560482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.560509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.571433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.571457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.585638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.585667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.595338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.595364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.609589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.609630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.619218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.619243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.633329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.633354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.643914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.643939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.617 [2024-12-09 18:18:32.655776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.617 [2024-12-09 18:18:32.655802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 11501.00 IOPS, 89.85 MiB/s [2024-12-09T17:18:32.918Z] [2024-12-09 18:18:32.666745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.666771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.682874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.682914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.698569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.698602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.714945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.714971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.730646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.730689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.746532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.746598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.762255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.762299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.780670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.780699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.791232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.791259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.805877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.805905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.824701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.824730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.834880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.834908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.851204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 
[2024-12-09 18:18:32.851230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.865905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.865933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.884230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.884257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.877 [2024-12-09 18:18:32.893980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.877 [2024-12-09 18:18:32.894007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.878 [2024-12-09 18:18:32.910104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.878 [2024-12-09 18:18:32.910130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:32.928215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:32.928242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:32.938883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:32.938923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:32.952766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:32.952794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:32.962582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:32.962610] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:32.978714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:32.978743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:32.994958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:32.994985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:33.011149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:33.011176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:33.023502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:33.023530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:33.036870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:33.036897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:33.046472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:33.046497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:33.062562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:33.062588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.137 [2024-12-09 18:18:33.077809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.137 [2024-12-09 18:18:33.077837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:10.137 [2024-12-09 18:18:33.097201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:10.137 [2024-12-09 18:18:33.097241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats for every add-namespace attempt from 18:18:33.107 through 18:18:35.253; duplicate entries omitted, throughput samples kept]
00:29:10.656 11487.33 IOPS, 89.74 MiB/s [2024-12-09T17:18:33.697Z]
00:29:11.697 11482.00 IOPS, 89.70 MiB/s [2024-12-09T17:18:34.738Z]
00:29:12.474 [2024-12-09 18:18:35.269372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.474 [2024-12-09 18:18:35.269398]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.474 [2024-12-09 18:18:35.278943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.474 [2024-12-09 18:18:35.278970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.474 [2024-12-09 18:18:35.293930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.474 [2024-12-09 18:18:35.293964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.474 [2024-12-09 18:18:35.303589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.474 [2024-12-09 18:18:35.303616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.474 [2024-12-09 18:18:35.318348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.318375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.334662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.334690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.352912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.352939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.363766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.363793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.376907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.376935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:12.475 [2024-12-09 18:18:35.386450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.386478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.398523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.398573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.414165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.414191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.423977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.424006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.435873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.435914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.446879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.446906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.462644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.462672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.478465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.478492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.496718] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.496745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.475 [2024-12-09 18:18:35.506355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.475 [2024-12-09 18:18:35.506382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.522107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.522135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.532187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.532213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.544218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.544247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.555557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.555598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.568272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.568300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.578047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.578083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.590092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.590119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.605320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.605361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.614947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.614974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.630919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.630946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.645934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.645977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.655848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.655875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.667805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.667847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 11483.40 IOPS, 89.71 MiB/s [2024-12-09T17:18:35.773Z] [2024-12-09 18:18:35.680015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.680043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.684410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.684434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 00:29:12.732 Latency(us) 00:29:12.732 [2024-12-09T17:18:35.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.732 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:12.732 Nvme1n1 : 5.01 11493.50 89.79 0.00 0.00 11121.11 3009.80 19418.07 00:29:12.732 [2024-12-09T17:18:35.773Z] =================================================================================================================== 00:29:12.732 [2024-12-09T17:18:35.773Z] Total : 11493.50 89.79 0.00 0.00 11121.11 3009.80 19418.07 00:29:12.732 [2024-12-09 18:18:35.692421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.692446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.700404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.700427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.708451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.708490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.716478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.716529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.724476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.724528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.732473] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.732522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.740469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.740517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.748477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.748529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.756478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.756527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.732 [2024-12-09 18:18:35.764476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.732 [2024-12-09 18:18:35.764527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.989 [2024-12-09 18:18:35.772482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.989 [2024-12-09 18:18:35.772532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.989 [2024-12-09 18:18:35.780477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.989 [2024-12-09 18:18:35.780529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.989 [2024-12-09 18:18:35.788476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.989 [2024-12-09 18:18:35.788527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.989 [2024-12-09 18:18:35.796473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:12.989 [2024-12-09 18:18:35.796522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.804472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.804521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.812464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.812505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.820403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.820424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.828402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.828422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.836401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.836421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.844404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.844425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.852471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.852522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.860470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 
[2024-12-09 18:18:35.860533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.868404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.868425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.876404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.876425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 [2024-12-09 18:18:35.884404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.990 [2024-12-09 18:18:35.884425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1620646) - No such process 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1620646 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- common/autotest_common.sh@10 -- # set +x 00:29:12.990 delay0 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.990 18:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:12.990 [2024-12-09 18:18:36.001618] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:21.114 [2024-12-09 18:18:43.157009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1c490 is same with the state(6) to be set 00:29:21.114 Initializing NVMe Controllers 00:29:21.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.114 Initialization complete. Launching workers. 
00:29:21.114 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 220, failed: 26002 00:29:21.114 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26077, failed to submit 145 00:29:21.114 success 26011, unsuccessful 66, failed 0 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.114 rmmod nvme_tcp 00:29:21.114 rmmod nvme_fabrics 00:29:21.114 rmmod nvme_keyring 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1619331 ']' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1619331 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 1619331 ']' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1619331 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1619331 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1619331' 00:29:21.114 killing process with pid 1619331 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1619331 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1619331 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.114 18:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.021 00:29:23.021 real 0m29.073s 00:29:23.021 user 0m41.147s 00:29:23.021 sys 0m10.195s 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 ************************************ 00:29:23.021 END TEST nvmf_zcopy 00:29:23.021 ************************************ 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 
************************************ 00:29:23.021 START TEST nvmf_nmic 00:29:23.021 ************************************ 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:23.021 * Looking for test storage... 00:29:23.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.021 18:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:23.021 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.022 18:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.022 --rc genhtml_branch_coverage=1 00:29:23.022 --rc genhtml_function_coverage=1 00:29:23.022 --rc genhtml_legend=1 00:29:23.022 --rc geninfo_all_blocks=1 00:29:23.022 --rc geninfo_unexecuted_blocks=1 00:29:23.022 00:29:23.022 ' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.022 --rc genhtml_branch_coverage=1 00:29:23.022 --rc genhtml_function_coverage=1 00:29:23.022 --rc genhtml_legend=1 00:29:23.022 --rc geninfo_all_blocks=1 00:29:23.022 --rc geninfo_unexecuted_blocks=1 00:29:23.022 00:29:23.022 ' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.022 --rc genhtml_branch_coverage=1 00:29:23.022 --rc genhtml_function_coverage=1 00:29:23.022 --rc genhtml_legend=1 00:29:23.022 --rc geninfo_all_blocks=1 00:29:23.022 --rc geninfo_unexecuted_blocks=1 00:29:23.022 00:29:23.022 ' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:23.022 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.022 --rc genhtml_branch_coverage=1 00:29:23.022 --rc genhtml_function_coverage=1 00:29:23.022 --rc genhtml_legend=1 00:29:23.022 --rc geninfo_all_blocks=1 00:29:23.022 --rc geninfo_unexecuted_blocks=1 00:29:23.022 00:29:23.022 ' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.022 18:18:45 
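The scripts/common.sh trace above (lines @340-@368) compares two dotted version strings component by component, padding the shorter one with zeros. A standalone sketch of the same idea — function names here (`ver_lt`) are illustrative, not the harness's own API:

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the component-wise version compare traced in
# the log; `ver_lt` is a made-up name, not the scripts/common.sh interface.

# Like the harness's `decimal`: accept only purely numeric components.
decimal() {
    [[ $1 =~ ^[0-9]+$ ]] || { echo "invalid component: $1" >&2; return 1; }
    echo "$1"
}

# Succeed (return 0) when version $1 sorts strictly before version $2.
ver_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local v a b len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.1 2.0 && echo "1.1 sorts before 2.0"
```

Because the comparison is arithmetic per component, `1.9` correctly sorts before `1.10`, which a plain string compare would get wrong.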
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.022 18:18:45 
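The PATH values echoed above have grown many duplicate entries because each sourcing of paths/export.sh prepends the same directories again. A sketch of squashing such duplicates while keeping first-seen order — `dedup_path` is a hypothetical helper, not part of export.sh:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not in paths/export.sh): drop repeated PATH entries,
# preserving the first occurrence of each.
dedup_path() {
    # Treat ':' as the record separator and print each entry only once.
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

PATH_DEMO="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/bin"
dedup_path "$PATH_DEMO"   # /opt/go/1.21.1/bin:/usr/bin:/usr/local/bin
echo
```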
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.022 18:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:24.929 18:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.929 18:18:47 
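The e810/x722/mlx arrays above are filled from a `pci_bus_cache` associative array keyed by `"$vendor:$device"`. A minimal sketch of that lookup shape; the table contents and bus addresses below are fabricated for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the "$vendor:$device" keyed lookup behind pci_bus_cache; the
# entries here are made up, not read from a real PCI bus.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # two E810 functions
    ["0x15b3:0x1017"]="0000:81:00.0"                # one ConnectX-5 port
)
intel=0x8086 mellanox=0x15b3

e810=() mlx=()
# Unquoted expansion word-splits a multi-device entry into array elements,
# matching the e810+=(${pci_bus_cache[...]}) pattern in the trace.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

echo "e810: ${#e810[@]} device(s), mlx: ${#mlx[@]} device(s)"
```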
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.929 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.929 18:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.929 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.929 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.930 18:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.930 18:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.190 18:18:48 
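The interface discovery above resolves each PCI function to its netdev with a single sysfs glob, then strips the path to keep only the interface name. The same pattern, exercised against a throwaway directory instead of the real `/sys/bus/pci/devices` tree (device and interface names fabricated):

```shell
#!/usr/bin/env bash
# Re-creation of pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) against a
# temp fixture; the device addresses and interface names are made up.
fixture=$(mktemp -d)
mkdir -p "$fixture/0000:0a:00.0/net/cvl_0_0" \
         "$fixture/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in "$fixture"/*; do
    pci_net_devs=("$pci/net/"*)                # one glob per PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface name
    net_devs+=("${pci_net_devs[@]}")
done
echo "Found net devices: ${net_devs[*]}"
rm -rf "$fixture"
```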
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:29:25.190 00:29:25.190 --- 10.0.0.2 ping statistics --- 00:29:25.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.190 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:29:25.190 00:29:25.190 --- 10.0.0.1 ping statistics --- 00:29:25.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.190 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1624071 
00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1624071 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1624071 ']' 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.190 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.190 [2024-12-09 18:18:48.166907] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:25.190 [2024-12-09 18:18:48.168087] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:29:25.190 [2024-12-09 18:18:48.168167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.449 [2024-12-09 18:18:48.245942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.449 [2024-12-09 18:18:48.313501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.449 [2024-12-09 18:18:48.313581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.449 [2024-12-09 18:18:48.313598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.449 [2024-12-09 18:18:48.313610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.449 [2024-12-09 18:18:48.313633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.449 [2024-12-09 18:18:48.315372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.449 [2024-12-09 18:18:48.315437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.449 [2024-12-09 18:18:48.315467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.449 [2024-12-09 18:18:48.315470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.449 [2024-12-09 18:18:48.417048] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:25.449 [2024-12-09 18:18:48.417317] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:25.449 [2024-12-09 18:18:48.417643] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:25.449 [2024-12-09 18:18:48.418277] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:25.449 [2024-12-09 18:18:48.418512] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.449 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.449 [2024-12-09 18:18:48.472199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 Malloc0 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 [2024-12-09 18:18:48.532405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:25.709 test case1: single bdev can't be used in multiple subsystems 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 [2024-12-09 18:18:48.556145] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:29:25.709 [2024-12-09 18:18:48.556174] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:25.709 [2024-12-09 18:18:48.556211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.709 request: 00:29:25.709 { 00:29:25.709 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:25.709 "namespace": { 00:29:25.709 "bdev_name": "Malloc0", 00:29:25.709 "no_auto_visible": false, 00:29:25.709 "hide_metadata": false 00:29:25.709 }, 00:29:25.709 "method": "nvmf_subsystem_add_ns", 00:29:25.709 "req_id": 1 00:29:25.709 } 00:29:25.709 Got JSON-RPC error response 00:29:25.709 response: 00:29:25.709 { 00:29:25.709 "code": -32602, 00:29:25.709 "message": "Invalid parameters" 00:29:25.709 } 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:25.709 Adding namespace failed - expected result. 
00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:25.709 test case2: host connect to nvmf target in multiple paths 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:25.709 [2024-12-09 18:18:48.564235] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:25.709 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:25.969 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:25.969 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:25.969 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:25.969 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:25.969 18:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:28.503 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:28.503 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:28.503 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:28.503 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:28.504 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:28.504 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:28.504 18:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:28.504 [global] 00:29:28.504 thread=1 00:29:28.504 invalidate=1 00:29:28.504 rw=write 00:29:28.504 time_based=1 00:29:28.504 runtime=1 00:29:28.504 ioengine=libaio 00:29:28.504 direct=1 00:29:28.504 bs=4096 00:29:28.504 iodepth=1 00:29:28.504 norandommap=0 00:29:28.504 numjobs=1 00:29:28.504 00:29:28.504 verify_dump=1 00:29:28.504 verify_backlog=512 00:29:28.504 verify_state_save=0 00:29:28.504 do_verify=1 00:29:28.504 verify=crc32c-intel 00:29:28.504 [job0] 00:29:28.504 filename=/dev/nvme0n1 00:29:28.504 Could not set queue depth (nvme0n1) 00:29:28.504 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:28.504 fio-3.35 00:29:28.504 Starting 1 thread 00:29:29.439 00:29:29.439 job0: (groupid=0, jobs=1): err= 0: pid=1624554: Mon Dec 9 
18:18:52 2024 00:29:29.439 read: IOPS=2298, BW=9195KiB/s (9415kB/s)(9204KiB/1001msec) 00:29:29.439 slat (nsec): min=3958, max=60378, avg=8757.95, stdev=6324.61 00:29:29.439 clat (usec): min=157, max=599, avg=240.90, stdev=88.81 00:29:29.439 lat (usec): min=161, max=614, avg=249.66, stdev=91.24 00:29:29.439 clat percentiles (usec): 00:29:29.439 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:29:29.439 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:29:29.439 | 70.00th=[ 217], 80.00th=[ 269], 90.00th=[ 371], 95.00th=[ 486], 00:29:29.439 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 603], 99.95th=[ 603], 00:29:29.439 | 99.99th=[ 603] 00:29:29.439 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:29:29.439 slat (nsec): min=5275, max=47220, avg=9104.24, stdev=4241.03 00:29:29.439 clat (usec): min=120, max=323, avg=152.14, stdev=20.88 00:29:29.439 lat (usec): min=125, max=352, avg=161.25, stdev=21.85 00:29:29.439 clat percentiles (usec): 00:29:29.439 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 141], 00:29:29.439 | 30.00th=[ 143], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:29:29.439 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 184], 95.00th=[ 190], 00:29:29.439 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 289], 99.95th=[ 293], 00:29:29.439 | 99.99th=[ 322] 00:29:29.439 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:29:29.439 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:29:29.439 lat (usec) : 250=89.28%, 500=8.74%, 750=1.97% 00:29:29.439 cpu : usr=2.40%, sys=4.50%, ctx=4861, majf=0, minf=1 00:29:29.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:29.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.439 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:29.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:29.439 00:29:29.439 Run status group 0 (all jobs): 00:29:29.439 READ: bw=9195KiB/s (9415kB/s), 9195KiB/s-9195KiB/s (9415kB/s-9415kB/s), io=9204KiB (9425kB), run=1001-1001msec 00:29:29.439 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:29:29.439 00:29:29.439 Disk stats (read/write): 00:29:29.439 nvme0n1: ios=2098/2253, merge=0/0, ticks=512/338, in_queue=850, util=91.78% 00:29:29.439 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:29.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.699 rmmod nvme_tcp 00:29:29.699 rmmod nvme_fabrics 00:29:29.699 rmmod nvme_keyring 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1624071 ']' 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1624071 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1624071 ']' 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1624071 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624071 00:29:29.699 18:18:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624071' 00:29:29.699 killing process with pid 1624071 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1624071 00:29:29.699 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1624071 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.960 18:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.497 00:29:32.497 real 0m9.319s 00:29:32.497 user 0m17.125s 00:29:32.497 sys 0m3.484s 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:32.497 ************************************ 00:29:32.497 END TEST nvmf_nmic 00:29:32.497 ************************************ 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.497 ************************************ 00:29:32.497 START TEST nvmf_fio_target 00:29:32.497 ************************************ 00:29:32.497 18:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:32.497 * Looking for test storage... 
00:29:32.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.497 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.498 
18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.498 --rc genhtml_branch_coverage=1 00:29:32.498 --rc genhtml_function_coverage=1 00:29:32.498 --rc genhtml_legend=1 00:29:32.498 --rc geninfo_all_blocks=1 00:29:32.498 --rc geninfo_unexecuted_blocks=1 00:29:32.498 00:29:32.498 ' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.498 --rc genhtml_branch_coverage=1 00:29:32.498 --rc genhtml_function_coverage=1 00:29:32.498 --rc genhtml_legend=1 00:29:32.498 --rc geninfo_all_blocks=1 00:29:32.498 --rc geninfo_unexecuted_blocks=1 00:29:32.498 00:29:32.498 ' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.498 --rc genhtml_branch_coverage=1 00:29:32.498 --rc genhtml_function_coverage=1 00:29:32.498 --rc genhtml_legend=1 00:29:32.498 --rc geninfo_all_blocks=1 00:29:32.498 --rc geninfo_unexecuted_blocks=1 00:29:32.498 00:29:32.498 ' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.498 --rc genhtml_branch_coverage=1 00:29:32.498 --rc genhtml_function_coverage=1 00:29:32.498 --rc genhtml_legend=1 00:29:32.498 --rc geninfo_all_blocks=1 
00:29:32.498 --rc geninfo_unexecuted_blocks=1 00:29:32.498 00:29:32.498 ' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.498 
18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.498 18:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.498 
18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.498 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.499 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.499 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.499 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.499 18:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.499 18:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.409 18:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.409 
18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.409 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.409 18:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.409 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:29:34.410 00:29:34.410 --- 10.0.0.2 ping statistics --- 00:29:34.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.410 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:29:34.410 00:29:34.410 --- 10.0.0.1 ping statistics --- 00:29:34.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.410 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.410 18:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1626630 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1626630 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1626630 ']' 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.410 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.410 [2024-12-09 18:18:57.206644] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:34.410 [2024-12-09 18:18:57.207786] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:29:34.410 [2024-12-09 18:18:57.207857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.410 [2024-12-09 18:18:57.283762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.410 [2024-12-09 18:18:57.342961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.410 [2024-12-09 18:18:57.343018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.410 [2024-12-09 18:18:57.343033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.410 [2024-12-09 18:18:57.343045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.410 [2024-12-09 18:18:57.343055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.410 [2024-12-09 18:18:57.344591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.410 [2024-12-09 18:18:57.344628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.410 [2024-12-09 18:18:57.344658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.410 [2024-12-09 18:18:57.344661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.410 [2024-12-09 18:18:57.430963] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:34.410 [2024-12-09 18:18:57.431198] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:34.410 [2024-12-09 18:18:57.431492] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:34.410 [2024-12-09 18:18:57.432166] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:34.410 [2024-12-09 18:18:57.432378] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.670 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:34.929 [2024-12-09 18:18:57.737458] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.929 18:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:35.188 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:35.188 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:29:35.446 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:35.446 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:35.704 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:35.704 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:35.962 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:35.962 18:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:36.222 18:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:36.481 18:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:36.481 18:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:37.051 18:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:37.051 18:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:37.051 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:29:37.051 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:37.619 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:37.878 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:37.878 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.136 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:38.136 18:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:38.394 18:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.652 [2024-12-09 18:19:01.493591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.652 18:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:38.911 18:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:39.171 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:39.430 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:39.430 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:29:39.430 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:39.430 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:29:39.430 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:29:39.430 18:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:29:41.408 18:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:41.408 [global] 00:29:41.408 thread=1 00:29:41.408 invalidate=1 00:29:41.408 rw=write 00:29:41.408 time_based=1 00:29:41.408 runtime=1 00:29:41.408 ioengine=libaio 00:29:41.408 direct=1 00:29:41.408 bs=4096 00:29:41.408 iodepth=1 00:29:41.408 norandommap=0 00:29:41.408 numjobs=1 00:29:41.408 00:29:41.408 verify_dump=1 00:29:41.408 verify_backlog=512 00:29:41.408 verify_state_save=0 00:29:41.408 do_verify=1 00:29:41.408 verify=crc32c-intel 00:29:41.408 [job0] 00:29:41.408 filename=/dev/nvme0n1 00:29:41.408 [job1] 00:29:41.408 filename=/dev/nvme0n2 00:29:41.408 [job2] 00:29:41.408 filename=/dev/nvme0n3 00:29:41.408 [job3] 00:29:41.408 filename=/dev/nvme0n4 00:29:41.408 Could not set queue depth (nvme0n1) 00:29:41.408 Could not set queue depth (nvme0n2) 00:29:41.408 Could not set queue depth (nvme0n3) 00:29:41.408 Could not set queue depth (nvme0n4) 00:29:41.666 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:41.666 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:41.666 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:41.666 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:41.666 fio-3.35 00:29:41.666 Starting 4 threads 00:29:43.044 00:29:43.044 job0: (groupid=0, jobs=1): err= 0: pid=1627688: Mon Dec 9 18:19:05 2024 00:29:43.044 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:29:43.044 slat (nsec): min=7530, max=36359, avg=19409.68, stdev=9356.69 00:29:43.044 clat (usec): min=40845, max=42067, avg=41347.27, stdev=489.86 00:29:43.044 lat (usec): min=40877, 
max=42085, avg=41366.68, stdev=489.10 00:29:43.044 clat percentiles (usec): 00:29:43.044 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:43.044 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:43.044 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:43.044 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:43.044 | 99.99th=[42206] 00:29:43.044 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:29:43.044 slat (nsec): min=7434, max=39341, avg=9128.22, stdev=2261.32 00:29:43.044 clat (usec): min=171, max=381, avg=217.66, stdev=18.23 00:29:43.044 lat (usec): min=180, max=391, avg=226.78, stdev=18.47 00:29:43.044 clat percentiles (usec): 00:29:43.044 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:29:43.044 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:29:43.044 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 243], 00:29:43.044 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 383], 99.95th=[ 383], 00:29:43.044 | 99.99th=[ 383] 00:29:43.044 bw ( KiB/s): min= 4096, max= 4096, per=19.25%, avg=4096.00, stdev= 0.00, samples=1 00:29:43.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:43.044 lat (usec) : 250=92.70%, 500=3.18% 00:29:43.044 lat (msec) : 50=4.12% 00:29:43.044 cpu : usr=0.39%, sys=0.49%, ctx=534, majf=0, minf=2 00:29:43.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.044 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.044 job1: (groupid=0, jobs=1): err= 0: pid=1627689: Mon Dec 9 18:19:05 2024 00:29:43.044 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 
00:29:43.044 slat (nsec): min=4183, max=50999, avg=9040.79, stdev=5807.09 00:29:43.044 clat (usec): min=179, max=41829, avg=380.97, stdev=2332.14 00:29:43.044 lat (usec): min=183, max=41835, avg=390.01, stdev=2332.02 00:29:43.044 clat percentiles (usec): 00:29:43.044 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:29:43.044 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:29:43.044 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 310], 95.00th=[ 379], 00:29:43.044 | 99.00th=[ 449], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41681], 00:29:43.044 | 99.99th=[41681] 00:29:43.044 write: IOPS=1987, BW=7948KiB/s (8139kB/s)(7956KiB/1001msec); 0 zone resets 00:29:43.044 slat (usec): min=5, max=758, avg=11.67, stdev=17.56 00:29:43.044 clat (usec): min=141, max=303, avg=182.55, stdev=27.03 00:29:43.044 lat (usec): min=148, max=957, avg=194.22, stdev=32.56 00:29:43.044 clat percentiles (usec): 00:29:43.044 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:29:43.044 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:29:43.044 | 70.00th=[ 194], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 231], 00:29:43.044 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 306], 00:29:43.044 | 99.99th=[ 306] 00:29:43.044 bw ( KiB/s): min= 8192, max= 8192, per=38.50%, avg=8192.00, stdev= 0.00, samples=1 00:29:43.044 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:43.044 lat (usec) : 250=88.28%, 500=11.55%, 750=0.03% 00:29:43.044 lat (msec) : 50=0.14% 00:29:43.044 cpu : usr=1.30%, sys=4.30%, ctx=3529, majf=0, minf=1 00:29:43.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.044 issued rwts: total=1536,1989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.044 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:29:43.044 job2: (groupid=0, jobs=1): err= 0: pid=1627690: Mon Dec 9 18:19:05 2024 00:29:43.044 read: IOPS=1220, BW=4883KiB/s (5000kB/s)(4888KiB/1001msec) 00:29:43.044 slat (nsec): min=5360, max=37342, avg=10936.08, stdev=6464.07 00:29:43.044 clat (usec): min=195, max=41084, avg=525.38, stdev=3288.11 00:29:43.044 lat (usec): min=206, max=41099, avg=536.32, stdev=3288.53 00:29:43.044 clat percentiles (usec): 00:29:43.044 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:29:43.044 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:29:43.044 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 412], 00:29:43.044 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:43.044 | 99.99th=[41157] 00:29:43.044 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:43.044 slat (nsec): min=6027, max=49994, avg=12840.52, stdev=4603.38 00:29:43.044 clat (usec): min=150, max=716, avg=205.80, stdev=52.59 00:29:43.044 lat (usec): min=156, max=723, avg=218.64, stdev=51.59 00:29:43.044 clat percentiles (usec): 00:29:43.044 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:29:43.044 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 196], 00:29:43.044 | 70.00th=[ 217], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 289], 00:29:43.044 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 519], 99.95th=[ 717], 00:29:43.044 | 99.99th=[ 717] 00:29:43.044 bw ( KiB/s): min= 4096, max= 4096, per=19.25%, avg=4096.00, stdev= 0.00, samples=1 00:29:43.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:43.044 lat (usec) : 250=75.09%, 500=23.97%, 750=0.58%, 1000=0.07% 00:29:43.044 lat (msec) : 50=0.29% 00:29:43.044 cpu : usr=0.80%, sys=4.90%, ctx=2758, majf=0, minf=2 00:29:43.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:43.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.044 issued rwts: total=1222,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.045 job3: (groupid=0, jobs=1): err= 0: pid=1627691: Mon Dec 9 18:19:05 2024 00:29:43.045 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:29:43.045 slat (nsec): min=4814, max=38998, avg=11883.07, stdev=5139.02 00:29:43.045 clat (usec): min=205, max=42255, avg=654.41, stdev=3904.63 00:29:43.045 lat (usec): min=211, max=42260, avg=666.29, stdev=3904.65 00:29:43.045 clat percentiles (usec): 00:29:43.045 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 241], 00:29:43.045 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:29:43.045 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 375], 95.00th=[ 396], 00:29:43.045 | 99.00th=[ 465], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:43.045 | 99.99th=[42206] 00:29:43.045 write: IOPS=1430, BW=5722KiB/s (5860kB/s)(5728KiB/1001msec); 0 zone resets 00:29:43.045 slat (nsec): min=6906, max=61604, avg=14461.78, stdev=5894.27 00:29:43.045 clat (usec): min=150, max=452, avg=200.92, stdev=33.39 00:29:43.045 lat (usec): min=159, max=460, avg=215.38, stdev=32.35 00:29:43.045 clat percentiles (usec): 00:29:43.045 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 178], 00:29:43.045 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:29:43.045 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 247], 00:29:43.045 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 433], 99.95th=[ 453], 00:29:43.045 | 99.99th=[ 453] 00:29:43.045 bw ( KiB/s): min= 4096, max= 4096, per=19.25%, avg=4096.00, stdev= 0.00, samples=1 00:29:43.045 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:43.045 lat (usec) : 250=70.03%, 500=29.56% 00:29:43.045 lat (msec) : 20=0.04%, 50=0.37% 00:29:43.045 cpu : usr=0.90%, sys=4.10%, ctx=2457, 
majf=0, minf=1 00:29:43.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.045 issued rwts: total=1024,1432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.045 00:29:43.045 Run status group 0 (all jobs): 00:29:43.045 READ: bw=14.5MiB/s (15.2MB/s), 85.6KiB/s-6138KiB/s (87.7kB/s-6285kB/s), io=14.9MiB (15.6MB), run=1001-1028msec 00:29:43.045 WRITE: bw=20.8MiB/s (21.8MB/s), 1992KiB/s-7948KiB/s (2040kB/s-8139kB/s), io=21.4MiB (22.4MB), run=1001-1028msec 00:29:43.045 00:29:43.045 Disk stats (read/write): 00:29:43.045 nvme0n1: ios=67/512, merge=0/0, ticks=853/105, in_queue=958, util=99.00% 00:29:43.045 nvme0n2: ios=1368/1536, merge=0/0, ticks=681/262, in_queue=943, util=97.86% 00:29:43.045 nvme0n3: ios=1024/1275, merge=0/0, ticks=532/262, in_queue=794, util=88.92% 00:29:43.045 nvme0n4: ios=845/1024, merge=0/0, ticks=889/210, in_queue=1099, util=98.42% 00:29:43.045 18:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:43.045 [global] 00:29:43.045 thread=1 00:29:43.045 invalidate=1 00:29:43.045 rw=randwrite 00:29:43.045 time_based=1 00:29:43.045 runtime=1 00:29:43.045 ioengine=libaio 00:29:43.045 direct=1 00:29:43.045 bs=4096 00:29:43.045 iodepth=1 00:29:43.045 norandommap=0 00:29:43.045 numjobs=1 00:29:43.045 00:29:43.045 verify_dump=1 00:29:43.045 verify_backlog=512 00:29:43.045 verify_state_save=0 00:29:43.045 do_verify=1 00:29:43.045 verify=crc32c-intel 00:29:43.045 [job0] 00:29:43.045 filename=/dev/nvme0n1 00:29:43.045 [job1] 00:29:43.045 filename=/dev/nvme0n2 00:29:43.045 [job2] 00:29:43.045 filename=/dev/nvme0n3 00:29:43.045 [job3] 00:29:43.045 
filename=/dev/nvme0n4 00:29:43.045 Could not set queue depth (nvme0n1) 00:29:43.045 Could not set queue depth (nvme0n2) 00:29:43.045 Could not set queue depth (nvme0n3) 00:29:43.045 Could not set queue depth (nvme0n4) 00:29:43.045 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:43.045 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:43.045 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:43.045 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:43.045 fio-3.35 00:29:43.045 Starting 4 threads 00:29:44.420 00:29:44.420 job0: (groupid=0, jobs=1): err= 0: pid=1627924: Mon Dec 9 18:19:07 2024 00:29:44.420 read: IOPS=30, BW=120KiB/s (123kB/s)(124KiB/1030msec) 00:29:44.420 slat (nsec): min=8185, max=18215, avg=13897.61, stdev=2583.89 00:29:44.421 clat (usec): min=266, max=41989, avg=29168.44, stdev=18693.10 00:29:44.421 lat (usec): min=275, max=42002, avg=29182.33, stdev=18694.12 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 388], 20.00th=[ 519], 00:29:44.421 | 30.00th=[40109], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:29:44.421 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:44.421 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:44.421 | 99.99th=[42206] 00:29:44.421 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:29:44.421 slat (nsec): min=7940, max=42122, avg=18630.09, stdev=4299.28 00:29:44.421 clat (usec): min=158, max=1885, avg=220.80, stdev=80.37 00:29:44.421 lat (usec): min=180, max=1900, avg=239.43, stdev=80.97 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:29:44.421 | 30.00th=[ 190], 40.00th=[ 
206], 50.00th=[ 229], 60.00th=[ 233], 00:29:44.421 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:29:44.421 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 1893], 99.95th=[ 1893], 00:29:44.421 | 99.99th=[ 1893] 00:29:44.421 bw ( KiB/s): min= 4096, max= 4096, per=18.89%, avg=4096.00, stdev= 0.00, samples=1 00:29:44.421 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:44.421 lat (usec) : 250=82.32%, 500=12.89%, 750=0.55% 00:29:44.421 lat (msec) : 2=0.18%, 50=4.05% 00:29:44.421 cpu : usr=0.87%, sys=0.87%, ctx=544, majf=0, minf=1 00:29:44.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:44.421 job1: (groupid=0, jobs=1): err= 0: pid=1627925: Mon Dec 9 18:19:07 2024 00:29:44.421 read: IOPS=2298, BW=9195KiB/s (9415kB/s)(9204KiB/1001msec) 00:29:44.421 slat (nsec): min=4273, max=58887, avg=7142.01, stdev=4857.35 00:29:44.421 clat (usec): min=173, max=555, avg=223.00, stdev=56.76 00:29:44.421 lat (usec): min=181, max=574, avg=230.14, stdev=60.04 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 192], 20.00th=[ 196], 00:29:44.421 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:29:44.421 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 281], 95.00th=[ 375], 00:29:44.421 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 545], 00:29:44.421 | 99.99th=[ 553] 00:29:44.421 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:29:44.421 slat (nsec): min=5946, max=41515, avg=9391.06, stdev=3977.96 00:29:44.421 clat (usec): min=135, max=438, avg=169.96, stdev=26.98 00:29:44.421 lat (usec): min=143, max=454, 
avg=179.35, stdev=28.83 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:29:44.421 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:29:44.421 | 70.00th=[ 174], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 221], 00:29:44.421 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 404], 99.95th=[ 420], 00:29:44.421 | 99.99th=[ 441] 00:29:44.421 bw ( KiB/s): min=11640, max=11640, per=53.68%, avg=11640.00, stdev= 0.00, samples=1 00:29:44.421 iops : min= 2910, max= 2910, avg=2910.00, stdev= 0.00, samples=1 00:29:44.421 lat (usec) : 250=92.49%, 500=7.12%, 750=0.39% 00:29:44.421 cpu : usr=1.80%, sys=4.50%, ctx=4862, majf=0, minf=1 00:29:44.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:44.421 job2: (groupid=0, jobs=1): err= 0: pid=1627926: Mon Dec 9 18:19:07 2024 00:29:44.421 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:29:44.421 slat (nsec): min=10958, max=29267, avg=15842.05, stdev=3207.88 00:29:44.421 clat (usec): min=40933, max=41029, avg=40981.69, stdev=26.60 00:29:44.421 lat (usec): min=40946, max=41058, avg=40997.53, stdev=27.83 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:44.421 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:44.421 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:44.421 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:44.421 | 99.99th=[41157] 00:29:44.421 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:29:44.421 slat 
(nsec): min=8204, max=47768, avg=22253.10, stdev=4550.50 00:29:44.421 clat (usec): min=191, max=375, avg=239.33, stdev=18.49 00:29:44.421 lat (usec): min=204, max=400, avg=261.58, stdev=18.79 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 227], 00:29:44.421 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:29:44.421 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 269], 00:29:44.421 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 375], 99.95th=[ 375], 00:29:44.421 | 99.99th=[ 375] 00:29:44.421 bw ( KiB/s): min= 4096, max= 4096, per=18.89%, avg=4096.00, stdev= 0.00, samples=1 00:29:44.421 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:44.421 lat (usec) : 250=72.47%, 500=23.41% 00:29:44.421 lat (msec) : 50=4.12% 00:29:44.421 cpu : usr=0.77%, sys=1.35%, ctx=535, majf=0, minf=1 00:29:44.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:44.421 job3: (groupid=0, jobs=1): err= 0: pid=1627927: Mon Dec 9 18:19:07 2024 00:29:44.421 read: IOPS=1527, BW=6109KiB/s (6256kB/s)(6164KiB/1009msec) 00:29:44.421 slat (nsec): min=5715, max=32292, avg=8575.55, stdev=2845.25 00:29:44.421 clat (usec): min=212, max=41125, avg=369.55, stdev=1793.01 00:29:44.421 lat (usec): min=219, max=41136, avg=378.12, stdev=1793.51 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 227], 00:29:44.421 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 245], 00:29:44.421 | 70.00th=[ 258], 80.00th=[ 367], 90.00th=[ 494], 95.00th=[ 515], 00:29:44.421 | 99.00th=[ 611], 99.50th=[ 693], 
99.90th=[41157], 99.95th=[41157], 00:29:44.421 | 99.99th=[41157] 00:29:44.421 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:29:44.421 slat (nsec): min=6644, max=43791, avg=12388.70, stdev=6263.77 00:29:44.421 clat (usec): min=144, max=1136, avg=190.36, stdev=34.23 00:29:44.421 lat (usec): min=151, max=1145, avg=202.75, stdev=36.94 00:29:44.421 clat percentiles (usec): 00:29:44.421 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:29:44.421 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 198], 00:29:44.421 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 237], 00:29:44.421 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 322], 00:29:44.421 | 99.99th=[ 1139] 00:29:44.421 bw ( KiB/s): min= 8192, max= 8192, per=37.78%, avg=8192.00, stdev= 0.00, samples=2 00:29:44.421 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:29:44.421 lat (usec) : 250=84.56%, 500=12.09%, 750=3.18%, 1000=0.03% 00:29:44.421 lat (msec) : 2=0.06%, 50=0.08% 00:29:44.421 cpu : usr=3.08%, sys=4.66%, ctx=3590, majf=0, minf=1 00:29:44.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.421 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:44.421 00:29:44.421 Run status group 0 (all jobs): 00:29:44.421 READ: bw=14.6MiB/s (15.4MB/s), 84.7KiB/s-9195KiB/s (86.7kB/s-9415kB/s), io=15.2MiB (16.0MB), run=1001-1039msec 00:29:44.421 WRITE: bw=21.2MiB/s (22.2MB/s), 1971KiB/s-9.99MiB/s (2018kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1039msec 00:29:44.421 00:29:44.421 Disk stats (read/write): 00:29:44.421 nvme0n1: ios=75/512, merge=0/0, ticks=725/111, in_queue=836, util=86.87% 00:29:44.421 nvme0n2: ios=2087/2250, 
merge=0/0, ticks=1408/375, in_queue=1783, util=97.36% 00:29:44.421 nvme0n3: ios=71/512, merge=0/0, ticks=1626/116, in_queue=1742, util=98.85% 00:29:44.421 nvme0n4: ios=1552/1536, merge=0/0, ticks=1502/272, in_queue=1774, util=98.95% 00:29:44.421 18:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:44.421 [global] 00:29:44.421 thread=1 00:29:44.421 invalidate=1 00:29:44.421 rw=write 00:29:44.421 time_based=1 00:29:44.421 runtime=1 00:29:44.421 ioengine=libaio 00:29:44.422 direct=1 00:29:44.422 bs=4096 00:29:44.422 iodepth=128 00:29:44.422 norandommap=0 00:29:44.422 numjobs=1 00:29:44.422 00:29:44.422 verify_dump=1 00:29:44.422 verify_backlog=512 00:29:44.422 verify_state_save=0 00:29:44.422 do_verify=1 00:29:44.422 verify=crc32c-intel 00:29:44.422 [job0] 00:29:44.422 filename=/dev/nvme0n1 00:29:44.422 [job1] 00:29:44.422 filename=/dev/nvme0n2 00:29:44.422 [job2] 00:29:44.422 filename=/dev/nvme0n3 00:29:44.422 [job3] 00:29:44.422 filename=/dev/nvme0n4 00:29:44.422 Could not set queue depth (nvme0n1) 00:29:44.422 Could not set queue depth (nvme0n2) 00:29:44.422 Could not set queue depth (nvme0n3) 00:29:44.422 Could not set queue depth (nvme0n4) 00:29:44.679 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:44.679 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:44.679 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:44.679 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:44.679 fio-3.35 00:29:44.680 Starting 4 threads 00:29:46.054 00:29:46.054 job0: (groupid=0, jobs=1): err= 0: pid=1628170: Mon Dec 9 18:19:08 2024 00:29:46.054 read: IOPS=6581, BW=25.7MiB/s 
(27.0MB/s)(25.9MiB/1006msec) 00:29:46.054 slat (usec): min=2, max=8248, avg=69.28, stdev=542.80 00:29:46.055 clat (usec): min=1299, max=19243, avg=9505.73, stdev=2227.95 00:29:46.055 lat (usec): min=3396, max=22066, avg=9575.01, stdev=2270.62 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[ 6325], 5.00th=[ 7308], 10.00th=[ 7898], 20.00th=[ 8160], 00:29:46.055 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:29:46.055 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[13042], 95.00th=[15008], 00:29:46.055 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18744], 99.95th=[19268], 00:29:46.055 | 99.99th=[19268] 00:29:46.055 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:29:46.055 slat (usec): min=3, max=7475, avg=68.56, stdev=453.80 00:29:46.055 clat (usec): min=1057, max=48787, avg=9711.05, stdev=5139.35 00:29:46.055 lat (usec): min=1079, max=49518, avg=9779.61, stdev=5170.52 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[ 3818], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 7570], 00:29:46.055 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9503], 00:29:46.055 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11863], 95.00th=[12649], 00:29:46.055 | 99.00th=[43779], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:29:46.055 | 99.99th=[49021] 00:29:46.055 bw ( KiB/s): min=26576, max=26672, per=49.21%, avg=26624.00, stdev=67.88, samples=2 00:29:46.055 iops : min= 6644, max= 6668, avg=6656.00, stdev=16.97, samples=2 00:29:46.055 lat (msec) : 2=0.02%, 4=0.70%, 10=72.41%, 20=25.56%, 50=1.32% 00:29:46.055 cpu : usr=10.95%, sys=12.74%, ctx=464, majf=0, minf=2 00:29:46.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:29:46.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.055 issued rwts: total=6621,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:46.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.055 job1: (groupid=0, jobs=1): err= 0: pid=1628188: Mon Dec 9 18:19:08 2024 00:29:46.055 read: IOPS=1806, BW=7226KiB/s (7400kB/s)(7284KiB/1008msec) 00:29:46.055 slat (usec): min=3, max=14569, avg=251.16, stdev=1397.83 00:29:46.055 clat (usec): min=2997, max=64670, avg=30742.27, stdev=11491.20 00:29:46.055 lat (usec): min=10940, max=64684, avg=30993.43, stdev=11598.96 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[11076], 5.00th=[17171], 10.00th=[19530], 20.00th=[22676], 00:29:46.055 | 30.00th=[23200], 40.00th=[24773], 50.00th=[28443], 60.00th=[28967], 00:29:46.055 | 70.00th=[32900], 80.00th=[41157], 90.00th=[49546], 95.00th=[55313], 00:29:46.055 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:29:46.055 | 99.99th=[64750] 00:29:46.055 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:29:46.055 slat (usec): min=4, max=28405, avg=256.37, stdev=1432.06 00:29:46.055 clat (usec): min=11445, max=65928, avg=32636.47, stdev=11765.24 00:29:46.055 lat (usec): min=11453, max=65978, avg=32892.85, stdev=11918.01 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[15401], 5.00th=[16319], 10.00th=[16581], 20.00th=[17695], 00:29:46.055 | 30.00th=[20055], 40.00th=[30802], 50.00th=[37487], 60.00th=[39060], 00:29:46.055 | 70.00th=[40633], 80.00th=[43254], 90.00th=[47449], 95.00th=[47973], 00:29:46.055 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:29:46.055 | 99.99th=[65799] 00:29:46.055 bw ( KiB/s): min= 6224, max=10160, per=15.14%, avg=8192.00, stdev=2783.17, samples=2 00:29:46.055 iops : min= 1556, max= 2540, avg=2048.00, stdev=695.79, samples=2 00:29:46.055 lat (msec) : 4=0.03%, 20=20.60%, 50=74.46%, 100=4.91% 00:29:46.055 cpu : usr=2.88%, sys=4.77%, ctx=170, majf=0, minf=1 00:29:46.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:46.055 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.055 issued rwts: total=1821,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.055 job2: (groupid=0, jobs=1): err= 0: pid=1628222: Mon Dec 9 18:19:08 2024 00:29:46.055 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:29:46.055 slat (usec): min=3, max=13108, avg=188.21, stdev=1081.73 00:29:46.055 clat (usec): min=14055, max=48925, avg=23060.01, stdev=5210.83 00:29:46.055 lat (usec): min=14073, max=48937, avg=23248.22, stdev=5329.75 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[14746], 5.00th=[17171], 10.00th=[18220], 20.00th=[18744], 00:29:46.055 | 30.00th=[19268], 40.00th=[20055], 50.00th=[22414], 60.00th=[23200], 00:29:46.055 | 70.00th=[25035], 80.00th=[27132], 90.00th=[30016], 95.00th=[32113], 00:29:46.055 | 99.00th=[41157], 99.50th=[44827], 99.90th=[49021], 99.95th=[49021], 00:29:46.055 | 99.99th=[49021] 00:29:46.055 write: IOPS=2361, BW=9447KiB/s (9674kB/s)(9532KiB/1009msec); 0 zone resets 00:29:46.055 slat (usec): min=4, max=12205, avg=249.05, stdev=1204.34 00:29:46.055 clat (usec): min=2521, max=69143, avg=33648.28, stdev=13597.31 00:29:46.055 lat (usec): min=11416, max=69160, avg=33897.33, stdev=13704.87 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[11994], 5.00th=[16712], 10.00th=[20579], 20.00th=[23462], 00:29:46.055 | 30.00th=[23987], 40.00th=[25297], 50.00th=[26084], 60.00th=[34866], 00:29:46.055 | 70.00th=[44303], 80.00th=[46924], 90.00th=[50594], 95.00th=[61604], 00:29:46.055 | 99.00th=[65799], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:29:46.055 | 99.99th=[68682] 00:29:46.055 bw ( KiB/s): min= 7808, max=10232, per=16.67%, avg=9020.00, stdev=1714.03, samples=2 00:29:46.055 iops : min= 1952, max= 2558, avg=2255.00, stdev=428.51, samples=2 00:29:46.055 lat (msec) : 4=0.02%, 20=23.83%, 
50=70.46%, 100=5.69% 00:29:46.055 cpu : usr=3.57%, sys=5.75%, ctx=211, majf=0, minf=1 00:29:46.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:46.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.055 issued rwts: total=2048,2383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.055 job3: (groupid=0, jobs=1): err= 0: pid=1628229: Mon Dec 9 18:19:08 2024 00:29:46.055 read: IOPS=2154, BW=8619KiB/s (8826kB/s)(8688KiB/1008msec) 00:29:46.055 slat (usec): min=2, max=14477, avg=168.76, stdev=1084.55 00:29:46.055 clat (usec): min=3293, max=60936, avg=22589.52, stdev=8226.06 00:29:46.055 lat (usec): min=8049, max=60943, avg=22758.28, stdev=8269.20 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[17171], 00:29:46.055 | 30.00th=[18220], 40.00th=[18744], 50.00th=[22938], 60.00th=[23987], 00:29:46.055 | 70.00th=[26084], 80.00th=[29230], 90.00th=[32900], 95.00th=[34341], 00:29:46.055 | 99.00th=[43254], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:29:46.055 | 99.99th=[61080] 00:29:46.055 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:29:46.055 slat (usec): min=4, max=26035, avg=238.85, stdev=1384.43 00:29:46.055 clat (usec): min=12371, max=76714, avg=30539.52, stdev=13096.90 00:29:46.055 lat (usec): min=12391, max=76738, avg=30778.37, stdev=13226.17 00:29:46.055 clat percentiles (usec): 00:29:46.055 | 1.00th=[12518], 5.00th=[17433], 10.00th=[17957], 20.00th=[21365], 00:29:46.055 | 30.00th=[23462], 40.00th=[23725], 50.00th=[25560], 60.00th=[30278], 00:29:46.055 | 70.00th=[34341], 80.00th=[39584], 90.00th=[47449], 95.00th=[62129], 00:29:46.055 | 99.00th=[72877], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:29:46.055 | 99.99th=[77071] 00:29:46.055 bw ( 
KiB/s): min= 8160, max=12288, per=18.90%, avg=10224.00, stdev=2918.94, samples=2 00:29:46.055 iops : min= 2040, max= 3072, avg=2556.00, stdev=729.73, samples=2 00:29:46.055 lat (msec) : 4=0.02%, 10=2.35%, 20=27.49%, 50=65.22%, 100=4.92% 00:29:46.055 cpu : usr=3.48%, sys=5.46%, ctx=178, majf=0, minf=1 00:29:46.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:46.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.055 issued rwts: total=2172,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.055 00:29:46.055 Run status group 0 (all jobs): 00:29:46.055 READ: bw=49.0MiB/s (51.4MB/s), 7226KiB/s-25.7MiB/s (7400kB/s-27.0MB/s), io=49.5MiB (51.9MB), run=1006-1009msec 00:29:46.055 WRITE: bw=52.8MiB/s (55.4MB/s), 8127KiB/s-25.8MiB/s (8322kB/s-27.1MB/s), io=53.3MiB (55.9MB), run=1006-1009msec 00:29:46.055 00:29:46.055 Disk stats (read/write): 00:29:46.055 nvme0n1: ios=5532/5632, merge=0/0, ticks=48741/48791, in_queue=97532, util=98.40% 00:29:46.055 nvme0n2: ios=1587/1763, merge=0/0, ticks=16451/18005, in_queue=34456, util=97.66% 00:29:46.055 nvme0n3: ios=1559/2031, merge=0/0, ticks=17773/34064, in_queue=51837, util=99.06% 00:29:46.055 nvme0n4: ios=2089/2095, merge=0/0, ticks=19955/23139, in_queue=43094, util=96.62% 00:29:46.055 18:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:46.055 [global] 00:29:46.055 thread=1 00:29:46.055 invalidate=1 00:29:46.055 rw=randwrite 00:29:46.055 time_based=1 00:29:46.056 runtime=1 00:29:46.056 ioengine=libaio 00:29:46.056 direct=1 00:29:46.056 bs=4096 00:29:46.056 iodepth=128 00:29:46.056 norandommap=0 00:29:46.056 numjobs=1 00:29:46.056 00:29:46.056 verify_dump=1 
00:29:46.056 verify_backlog=512 00:29:46.056 verify_state_save=0 00:29:46.056 do_verify=1 00:29:46.056 verify=crc32c-intel 00:29:46.056 [job0] 00:29:46.056 filename=/dev/nvme0n1 00:29:46.056 [job1] 00:29:46.056 filename=/dev/nvme0n2 00:29:46.056 [job2] 00:29:46.056 filename=/dev/nvme0n3 00:29:46.056 [job3] 00:29:46.056 filename=/dev/nvme0n4 00:29:46.056 Could not set queue depth (nvme0n1) 00:29:46.056 Could not set queue depth (nvme0n2) 00:29:46.056 Could not set queue depth (nvme0n3) 00:29:46.056 Could not set queue depth (nvme0n4) 00:29:46.056 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:46.056 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:46.056 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:46.056 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:46.056 fio-3.35 00:29:46.056 Starting 4 threads 00:29:47.431 00:29:47.431 job0: (groupid=0, jobs=1): err= 0: pid=1628504: Mon Dec 9 18:19:10 2024 00:29:47.431 read: IOPS=4711, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:29:47.431 slat (usec): min=2, max=10897, avg=97.66, stdev=582.98 00:29:47.431 clat (usec): min=1729, max=37857, avg=12086.03, stdev=3621.03 00:29:47.431 lat (usec): min=6276, max=37861, avg=12183.69, stdev=3666.27 00:29:47.431 clat percentiles (usec): 00:29:47.431 | 1.00th=[ 6849], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10159], 00:29:47.431 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:29:47.431 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15139], 95.00th=[17433], 00:29:47.431 | 99.00th=[30016], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:29:47.431 | 99.99th=[38011] 00:29:47.431 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:29:47.431 slat (usec): min=3, 
max=10849, avg=97.19, stdev=572.62 00:29:47.431 clat (usec): min=6447, max=38664, avg=13634.90, stdev=5067.13 00:29:47.431 lat (usec): min=6457, max=38667, avg=13732.09, stdev=5099.04 00:29:47.431 clat percentiles (usec): 00:29:47.431 | 1.00th=[ 7767], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:29:47.431 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11731], 60.00th=[12518], 00:29:47.431 | 70.00th=[14353], 80.00th=[15401], 90.00th=[18744], 95.00th=[24511], 00:29:47.431 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:29:47.431 | 99.99th=[38536] 00:29:47.431 bw ( KiB/s): min=20264, max=20688, per=33.09%, avg=20476.00, stdev=299.81, samples=2 00:29:47.431 iops : min= 5066, max= 5172, avg=5119.00, stdev=74.95, samples=2 00:29:47.431 lat (msec) : 2=0.01%, 10=11.36%, 20=82.06%, 50=6.57% 00:29:47.431 cpu : usr=5.18%, sys=8.57%, ctx=407, majf=0, minf=1 00:29:47.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:47.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.431 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.431 job1: (groupid=0, jobs=1): err= 0: pid=1628505: Mon Dec 9 18:19:10 2024 00:29:47.431 read: IOPS=3060, BW=12.0MiB/s (12.5MB/s)(12.5MiB/1045msec) 00:29:47.431 slat (usec): min=2, max=14226, avg=119.41, stdev=736.46 00:29:47.431 clat (usec): min=6445, max=64932, avg=16402.47, stdev=10368.90 00:29:47.431 lat (usec): min=6450, max=64945, avg=16521.89, stdev=10414.49 00:29:47.431 clat percentiles (usec): 00:29:47.431 | 1.00th=[ 6587], 5.00th=[10421], 10.00th=[11076], 20.00th=[11600], 00:29:47.431 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12518], 00:29:47.431 | 70.00th=[15270], 80.00th=[19006], 90.00th=[22938], 95.00th=[46924], 00:29:47.431 | 99.00th=[64226], 
99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:29:47.431 | 99.99th=[64750] 00:29:47.431 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:29:47.431 slat (usec): min=3, max=18082, avg=166.35, stdev=983.15 00:29:47.431 clat (usec): min=8042, max=96799, avg=22014.98, stdev=16486.02 00:29:47.431 lat (msec): min=8, max=101, avg=22.18, stdev=16.60 00:29:47.431 clat percentiles (usec): 00:29:47.431 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11469], 20.00th=[11863], 00:29:47.431 | 30.00th=[12256], 40.00th=[12387], 50.00th=[13173], 60.00th=[17957], 00:29:47.431 | 70.00th=[22152], 80.00th=[28443], 90.00th=[44303], 95.00th=[62653], 00:29:47.431 | 99.00th=[85459], 99.50th=[87557], 99.90th=[96994], 99.95th=[96994], 00:29:47.431 | 99.99th=[96994] 00:29:47.431 bw ( KiB/s): min= 8528, max=20128, per=23.15%, avg=14328.00, stdev=8202.44, samples=2 00:29:47.431 iops : min= 2132, max= 5032, avg=3582.00, stdev=2050.61, samples=2 00:29:47.431 lat (msec) : 10=3.39%, 20=71.19%, 50=19.73%, 100=5.69% 00:29:47.431 cpu : usr=2.68%, sys=4.50%, ctx=330, majf=0, minf=1 00:29:47.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:29:47.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.431 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.431 job2: (groupid=0, jobs=1): err= 0: pid=1628506: Mon Dec 9 18:19:10 2024 00:29:47.431 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:29:47.431 slat (usec): min=2, max=10882, avg=135.74, stdev=830.38 00:29:47.431 clat (usec): min=10929, max=28927, avg=17082.49, stdev=2920.58 00:29:47.431 lat (usec): min=10937, max=28932, avg=17218.23, stdev=2989.77 00:29:47.431 clat percentiles (usec): 00:29:47.431 | 1.00th=[11731], 5.00th=[12387], 10.00th=[13698], 20.00th=[14746], 
00:29:47.431 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16909], 60.00th=[17433], 00:29:47.431 | 70.00th=[17957], 80.00th=[19530], 90.00th=[21103], 95.00th=[22414], 00:29:47.431 | 99.00th=[25297], 99.50th=[26084], 99.90th=[28967], 99.95th=[28967], 00:29:47.431 | 99.99th=[28967] 00:29:47.431 write: IOPS=3399, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1003msec); 0 zone resets 00:29:47.431 slat (usec): min=3, max=10815, avg=164.49, stdev=845.86 00:29:47.431 clat (usec): min=533, max=55927, avg=21702.04, stdev=10664.22 00:29:47.431 lat (usec): min=5454, max=55933, avg=21866.53, stdev=10747.65 00:29:47.431 clat percentiles (usec): 00:29:47.431 | 1.00th=[ 5997], 5.00th=[11469], 10.00th=[12387], 20.00th=[13960], 00:29:47.431 | 30.00th=[14877], 40.00th=[17171], 50.00th=[18220], 60.00th=[20317], 00:29:47.431 | 70.00th=[22414], 80.00th=[27657], 90.00th=[37487], 95.00th=[47449], 00:29:47.431 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:29:47.431 | 99.99th=[55837] 00:29:47.431 bw ( KiB/s): min=12288, max=13968, per=21.22%, avg=13128.00, stdev=1187.94, samples=2 00:29:47.432 iops : min= 3072, max= 3492, avg=3282.00, stdev=296.98, samples=2 00:29:47.432 lat (usec) : 750=0.02% 00:29:47.432 lat (msec) : 10=1.25%, 20=69.13%, 50=27.55%, 100=2.05% 00:29:47.432 cpu : usr=2.89%, sys=4.49%, ctx=323, majf=0, minf=1 00:29:47.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:29:47.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.432 issued rwts: total=3072,3410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.432 job3: (groupid=0, jobs=1): err= 0: pid=1628507: Mon Dec 9 18:19:10 2024 00:29:47.432 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:29:47.432 slat (usec): min=2, max=13413, avg=141.59, stdev=951.52 00:29:47.432 clat (usec): 
min=2573, max=52895, avg=18401.99, stdev=10956.64 00:29:47.432 lat (usec): min=2590, max=52906, avg=18543.59, stdev=11033.95 00:29:47.432 clat percentiles (usec): 00:29:47.432 | 1.00th=[ 4883], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11076], 00:29:47.432 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13304], 60.00th=[15926], 00:29:47.432 | 70.00th=[18744], 80.00th=[25822], 90.00th=[37487], 95.00th=[44827], 00:29:47.432 | 99.00th=[51643], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:29:47.432 | 99.99th=[52691] 00:29:47.432 write: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1004msec); 0 zone resets 00:29:47.432 slat (usec): min=3, max=11159, avg=114.01, stdev=833.77 00:29:47.432 clat (usec): min=1048, max=43629, avg=15087.63, stdev=8399.17 00:29:47.432 lat (usec): min=1057, max=43638, avg=15201.64, stdev=8444.00 00:29:47.432 clat percentiles (usec): 00:29:47.432 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[10945], 00:29:47.432 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:29:47.432 | 70.00th=[14484], 80.00th=[16319], 90.00th=[29754], 95.00th=[38011], 00:29:47.432 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:29:47.432 | 99.99th=[43779] 00:29:47.432 bw ( KiB/s): min=10912, max=20480, per=25.37%, avg=15696.00, stdev=6765.60, samples=2 00:29:47.432 iops : min= 2728, max= 5120, avg=3924.00, stdev=1691.40, samples=2 00:29:47.432 lat (msec) : 2=0.04%, 4=0.35%, 10=12.24%, 20=68.36%, 50=18.43% 00:29:47.432 lat (msec) : 100=0.58% 00:29:47.432 cpu : usr=3.69%, sys=5.78%, ctx=205, majf=0, minf=1 00:29:47.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:47.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.432 issued rwts: total=3584,4052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.432 latency : target=0, window=0, percentile=100.00%, depth=128 
00:29:47.432 00:29:47.432 Run status group 0 (all jobs): 00:29:47.432 READ: bw=54.5MiB/s (57.2MB/s), 12.0MiB/s-18.4MiB/s (12.5MB/s-19.3MB/s), io=57.0MiB (59.8MB), run=1003-1045msec 00:29:47.432 WRITE: bw=60.4MiB/s (63.4MB/s), 13.3MiB/s-19.9MiB/s (13.9MB/s-20.9MB/s), io=63.1MiB (66.2MB), run=1003-1045msec 00:29:47.432 00:29:47.432 Disk stats (read/write): 00:29:47.432 nvme0n1: ios=4139/4524, merge=0/0, ticks=19897/20486, in_queue=40383, util=99.70% 00:29:47.432 nvme0n2: ios=2767/3072, merge=0/0, ticks=14562/23863, in_queue=38425, util=97.56% 00:29:47.432 nvme0n3: ios=2598/2567, merge=0/0, ticks=22185/30558, in_queue=52743, util=96.25% 00:29:47.432 nvme0n4: ios=3275/3584, merge=0/0, ticks=36921/33772, in_queue=70693, util=100.00% 00:29:47.432 18:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:47.432 18:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1628639 00:29:47.432 18:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:47.432 18:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:47.432 [global] 00:29:47.432 thread=1 00:29:47.432 invalidate=1 00:29:47.432 rw=read 00:29:47.432 time_based=1 00:29:47.432 runtime=10 00:29:47.432 ioengine=libaio 00:29:47.432 direct=1 00:29:47.432 bs=4096 00:29:47.432 iodepth=1 00:29:47.432 norandommap=1 00:29:47.432 numjobs=1 00:29:47.432 00:29:47.432 [job0] 00:29:47.432 filename=/dev/nvme0n1 00:29:47.432 [job1] 00:29:47.432 filename=/dev/nvme0n2 00:29:47.432 [job2] 00:29:47.432 filename=/dev/nvme0n3 00:29:47.432 [job3] 00:29:47.432 filename=/dev/nvme0n4 00:29:47.432 Could not set queue depth (nvme0n1) 00:29:47.432 Could not set queue depth (nvme0n2) 00:29:47.432 Could not set queue depth (nvme0n3) 00:29:47.432 Could not set queue depth 
(nvme0n4) 00:29:47.432 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:47.432 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:47.432 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:47.432 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:47.432 fio-3.35 00:29:47.432 Starting 4 threads 00:29:50.714 18:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:50.714 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=21864448, buflen=4096 00:29:50.714 fio: pid=1628738, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:50.714 18:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:50.714 18:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:50.714 18:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:50.972 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9437184, buflen=4096 00:29:50.972 fio: pid=1628737, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:51.230 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:51.230 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:51.230 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=21061632, buflen=4096 00:29:51.230 fio: pid=1628729, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:51.488 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:51.488 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:51.488 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=23449600, buflen=4096 00:29:51.488 fio: pid=1628731, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:51.488 00:29:51.488 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1628729: Mon Dec 9 18:19:14 2024 00:29:51.488 read: IOPS=1481, BW=5926KiB/s (6068kB/s)(20.1MiB/3471msec) 00:29:51.488 slat (usec): min=4, max=5880, avg=11.58, stdev=83.18 00:29:51.488 clat (usec): min=166, max=42023, avg=656.46, stdev=4064.43 00:29:51.488 lat (usec): min=179, max=47005, avg=668.04, stdev=4078.70 00:29:51.488 clat percentiles (usec): 00:29:51.488 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:29:51.488 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 233], 60.00th=[ 247], 00:29:51.488 | 70.00th=[ 258], 80.00th=[ 285], 90.00th=[ 334], 95.00th=[ 379], 00:29:51.488 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:29:51.488 | 99.99th=[42206] 00:29:51.488 bw ( KiB/s): min= 600, max=12032, per=34.75%, avg=6841.33, stdev=4972.52, samples=6 00:29:51.488 iops : min= 150, max= 3008, avg=1710.33, stdev=1243.13, samples=6 00:29:51.488 lat (usec) : 250=65.31%, 500=33.11%, 750=0.56% 00:29:51.488 lat (msec) : 50=0.99% 
00:29:51.488 cpu : usr=0.69%, sys=1.67%, ctx=5147, majf=0, minf=2 00:29:51.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.488 issued rwts: total=5143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:51.488 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1628731: Mon Dec 9 18:19:14 2024 00:29:51.488 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(22.4MiB/3761msec) 00:29:51.488 slat (usec): min=3, max=6838, avg=10.08, stdev=90.41 00:29:51.488 clat (usec): min=193, max=47851, avg=640.57, stdev=3966.33 00:29:51.488 lat (usec): min=198, max=47866, avg=649.46, stdev=3967.12 00:29:51.488 clat percentiles (usec): 00:29:51.488 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:29:51.488 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:29:51.488 | 70.00th=[ 260], 80.00th=[ 289], 90.00th=[ 359], 95.00th=[ 383], 00:29:51.488 | 99.00th=[ 644], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:29:51.488 | 99.99th=[47973] 00:29:51.488 bw ( KiB/s): min= 96, max=16064, per=33.19%, avg=6534.14, stdev=7952.90, samples=7 00:29:51.488 iops : min= 24, max= 4016, avg=1633.43, stdev=1988.33, samples=7 00:29:51.488 lat (usec) : 250=64.32%, 500=33.95%, 750=0.75%, 1000=0.02% 00:29:51.488 lat (msec) : 4=0.02%, 50=0.93% 00:29:51.488 cpu : usr=0.90%, sys=1.86%, ctx=5730, majf=0, minf=2 00:29:51.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.488 issued rwts: total=5726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.489 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:29:51.489 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1628737: Mon Dec 9 18:19:14 2024 00:29:51.489 read: IOPS=725, BW=2901KiB/s (2970kB/s)(9216KiB/3177msec) 00:29:51.489 slat (usec): min=4, max=11601, avg=17.16, stdev=241.50 00:29:51.489 clat (usec): min=211, max=42211, avg=1348.74, stdev=6614.38 00:29:51.489 lat (usec): min=216, max=53704, avg=1365.89, stdev=6649.43 00:29:51.489 clat percentiles (usec): 00:29:51.489 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:29:51.489 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:29:51.489 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 318], 95.00th=[ 347], 00:29:51.489 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:51.489 | 99.99th=[42206] 00:29:51.489 bw ( KiB/s): min= 256, max= 8576, per=15.49%, avg=3049.33, stdev=3949.35, samples=6 00:29:51.489 iops : min= 64, max= 2144, avg=762.33, stdev=987.34, samples=6 00:29:51.489 lat (usec) : 250=48.59%, 500=48.72% 00:29:51.489 lat (msec) : 50=2.65% 00:29:51.489 cpu : usr=0.54%, sys=1.07%, ctx=2308, majf=0, minf=2 00:29:51.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.489 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:51.489 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1628738: Mon Dec 9 18:19:14 2024 00:29:51.489 read: IOPS=1856, BW=7424KiB/s (7602kB/s)(20.9MiB/2876msec) 00:29:51.489 slat (nsec): min=3592, max=68234, avg=8699.57, stdev=6676.23 00:29:51.489 clat (usec): min=211, max=42218, avg=526.32, stdev=3266.73 00:29:51.489 lat (usec): 
min=216, max=42222, avg=535.02, stdev=3267.30 00:29:51.489 clat percentiles (usec): 00:29:51.489 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 233], 00:29:51.489 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:29:51.489 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 359], 95.00th=[ 379], 00:29:51.489 | 99.00th=[ 498], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:29:51.489 | 99.99th=[42206] 00:29:51.489 bw ( KiB/s): min= 1024, max=13784, per=35.52%, avg=6992.00, stdev=5428.89, samples=5 00:29:51.489 iops : min= 256, max= 3446, avg=1748.00, stdev=1357.22, samples=5 00:29:51.489 lat (usec) : 250=59.94%, 500=39.07%, 750=0.32%, 1000=0.02% 00:29:51.489 lat (msec) : 50=0.64% 00:29:51.489 cpu : usr=0.35%, sys=2.12%, ctx=5341, majf=0, minf=1 00:29:51.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.489 issued rwts: total=5339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:51.489 00:29:51.489 Run status group 0 (all jobs): 00:29:51.489 READ: bw=19.2MiB/s (20.2MB/s), 2901KiB/s-7424KiB/s (2970kB/s-7602kB/s), io=72.3MiB (75.8MB), run=2876-3761msec 00:29:51.489 00:29:51.489 Disk stats (read/write): 00:29:51.489 nvme0n1: ios=5183/0, merge=0/0, ticks=4245/0, in_queue=4245, util=99.31% 00:29:51.489 nvme0n2: ios=5764/0, merge=0/0, ticks=3617/0, in_queue=3617, util=99.46% 00:29:51.489 nvme0n3: ios=2339/0, merge=0/0, ticks=3797/0, in_queue=3797, util=98.94% 00:29:51.489 nvme0n4: ios=5333/0, merge=0/0, ticks=3808/0, in_queue=3808, util=99.22% 00:29:51.747 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:51.747 18:19:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:52.005 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:52.005 18:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:52.263 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:52.263 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:52.520 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:52.520 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:52.778 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:52.778 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1628639 00:29:52.778 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:52.778 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:53.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:53.037 nvmf hotplug test: fio failed as expected 00:29:53.037 18:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:53.295 18:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.295 rmmod nvme_tcp 00:29:53.295 rmmod nvme_fabrics 00:29:53.295 rmmod nvme_keyring 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1626630 ']' 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1626630 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1626630 ']' 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1626630 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1626630 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1626630' 00:29:53.295 killing process with pid 1626630 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1626630 00:29:53.295 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1626630 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.554 18:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.554 18:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.092 00:29:56.092 real 0m23.568s 00:29:56.092 user 1m7.651s 00:29:56.092 sys 0m9.919s 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.092 ************************************ 00:29:56.092 END TEST nvmf_fio_target 00:29:56.092 ************************************ 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.092 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:56.092 ************************************ 00:29:56.092 START TEST nvmf_bdevio 00:29:56.092 ************************************ 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:56.093 * Looking for test storage... 00:29:56.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.093 18:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:56.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.093 --rc genhtml_branch_coverage=1 00:29:56.093 --rc genhtml_function_coverage=1 00:29:56.093 --rc genhtml_legend=1 00:29:56.093 --rc geninfo_all_blocks=1 00:29:56.093 --rc geninfo_unexecuted_blocks=1 00:29:56.093 00:29:56.093 ' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:56.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.093 --rc genhtml_branch_coverage=1 00:29:56.093 --rc genhtml_function_coverage=1 00:29:56.093 --rc genhtml_legend=1 00:29:56.093 --rc geninfo_all_blocks=1 00:29:56.093 --rc geninfo_unexecuted_blocks=1 00:29:56.093 00:29:56.093 ' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:56.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.093 --rc genhtml_branch_coverage=1 00:29:56.093 --rc genhtml_function_coverage=1 00:29:56.093 --rc genhtml_legend=1 00:29:56.093 --rc geninfo_all_blocks=1 00:29:56.093 --rc geninfo_unexecuted_blocks=1 00:29:56.093 00:29:56.093 ' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:56.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.093 --rc genhtml_branch_coverage=1 00:29:56.093 --rc genhtml_function_coverage=1 00:29:56.093 --rc genhtml_legend=1 00:29:56.093 --rc 
geninfo_all_blocks=1 00:29:56.093 --rc geninfo_unexecuted_blocks=1 00:29:56.093 00:29:56.093 ' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.093 18:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.093 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.094 18:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.094 18:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.000 18:19:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:58.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:58.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.000 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.001 18:19:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:58.001 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:58.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.001 18:19:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:29:58.001 18:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.001 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.001 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.001 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.001 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:29:58.001 00:29:58.001 --- 10.0.0.2 ping statistics --- 00:29:58.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.001 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:29:58.001 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:58.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:58.260 00:29:58.260 --- 10.0.0.1 ping statistics --- 00:29:58.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.260 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1631366 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1631366 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1631366 ']' 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.260 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.260 [2024-12-09 18:19:21.115155] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:58.260 [2024-12-09 18:19:21.116255] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:29:58.260 [2024-12-09 18:19:21.116324] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.260 [2024-12-09 18:19:21.196784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:58.260 [2024-12-09 18:19:21.256027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.260 [2024-12-09 18:19:21.256080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.260 [2024-12-09 18:19:21.256116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.260 [2024-12-09 18:19:21.256127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.260 [2024-12-09 18:19:21.256136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.260 [2024-12-09 18:19:21.257668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:58.260 [2024-12-09 18:19:21.257734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:58.260 [2024-12-09 18:19:21.257799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:58.260 [2024-12-09 18:19:21.257802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.518 [2024-12-09 18:19:21.345615] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:58.518 [2024-12-09 18:19:21.345894] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:58.518 [2024-12-09 18:19:21.346136] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
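Aside on the reactor placement above: `nvmf_tgt` was started with `-m 0x78`, and the log shows reactors coming up on cores 3, 4, 5 and 6 — exactly the set bits of the mask. A small helper (not part of SPDK, just an illustration of how the hex core mask maps to cores) makes the decoding explicit:

```shell
#!/usr/bin/env bash
# Expand a hex CPU mask (as passed to nvmf_tgt -m) into the list of core
# numbers whose bits are set. Purely illustrative; SPDK does this in C.
mask_to_cores() {
    local mask=$(( $1 ))   # arithmetic expansion accepts 0x-prefixed hex
    local core=0
    local -a cores=()
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            cores+=("$core")
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[*]}"
}

mask_to_cores 0x78   # → 3 4 5 6, matching the reactor NOTICE lines above
```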
00:29:58.518 [2024-12-09 18:19:21.346738] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:58.518 [2024-12-09 18:19:21.346988] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.518 [2024-12-09 18:19:21.398538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.518 Malloc0 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:58.518 [2024-12-09 18:19:21.470761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
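The `rpc_cmd` calls traced in this stretch of the log configure the target in a fixed order: transport, backing bdev, subsystem, namespace, listener. A condensed sketch of that sequence (the `scripts/rpc.py` path is an assumption — the test uses the `rpc_cmd` wrapper — and the commands are only assembled here, not sent, since they need a live `nvmf_tgt`):

```shell
#!/usr/bin/env bash
# Hypothetical standalone replay of the bdevio.sh setup RPCs traced above.
# Assumes rpc.py lives at scripts/rpc.py; we only build and print the
# command list so the required ordering is visible.
rpc="scripts/rpc.py"
cmds=(
    "$rpc nvmf_create_transport -t tcp -o -u 8192"
    "$rpc bdev_malloc_create 64 512 -b Malloc0"
    "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${cmds[@]}"
```

The ordering matters: the namespace add references `Malloc0`, so the bdev must exist first, and the listener is added last so initiators only see a fully populated subsystem.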
00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.518 { 00:29:58.518 "params": { 00:29:58.518 "name": "Nvme$subsystem", 00:29:58.518 "trtype": "$TEST_TRANSPORT", 00:29:58.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.518 "adrfam": "ipv4", 00:29:58.518 "trsvcid": "$NVMF_PORT", 00:29:58.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.518 "hdgst": ${hdgst:-false}, 00:29:58.518 "ddgst": ${ddgst:-false} 00:29:58.518 }, 00:29:58.518 "method": "bdev_nvme_attach_controller" 00:29:58.518 } 00:29:58.518 EOF 00:29:58.518 )") 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
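The `gen_nvmf_target_json` trace above builds one config fragment per subsystem with a here-document, then merges them through `jq`. A minimal single-subsystem sketch of that heredoc step, with plain shell variables standing in for the environment the test exports (`$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, etc. — values taken from this log):

```shell
#!/usr/bin/env bash
# Simplified sketch of one gen_nvmf_target_json entry. Variable values are
# the ones visible in this log run; hdgst/ddgst default to false via the
# ${var:-false} expansions, exactly as in the traced heredoc.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Feeding this to `bdevio --json /dev/fd/62`, as the test does, attaches the initiator-side NVMe controller to the listener configured earlier without writing a config file to disk.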
00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:58.518 18:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.519 "params": { 00:29:58.519 "name": "Nvme1", 00:29:58.519 "trtype": "tcp", 00:29:58.519 "traddr": "10.0.0.2", 00:29:58.519 "adrfam": "ipv4", 00:29:58.519 "trsvcid": "4420", 00:29:58.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.519 "hdgst": false, 00:29:58.519 "ddgst": false 00:29:58.519 }, 00:29:58.519 "method": "bdev_nvme_attach_controller" 00:29:58.519 }' 00:29:58.519 [2024-12-09 18:19:21.520916] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:29:58.519 [2024-12-09 18:19:21.520987] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631505 ] 00:29:58.777 [2024-12-09 18:19:21.591489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.777 [2024-12-09 18:19:21.656422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.777 [2024-12-09 18:19:21.656475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.777 [2024-12-09 18:19:21.656478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.035 I/O targets: 00:29:59.035 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:59.035 00:29:59.035 00:29:59.035 CUnit - A unit testing framework for C - Version 2.1-3 00:29:59.035 http://cunit.sourceforge.net/ 00:29:59.035 00:29:59.035 00:29:59.035 Suite: bdevio tests on: Nvme1n1 00:29:59.035 Test: blockdev write read block ...passed 00:29:59.292 Test: blockdev write zeroes read block ...passed 00:29:59.292 Test: blockdev write zeroes read no split ...passed 00:29:59.292 Test: blockdev 
write zeroes read split ...passed 00:29:59.292 Test: blockdev write zeroes read split partial ...passed 00:29:59.292 Test: blockdev reset ...[2024-12-09 18:19:22.147752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:59.292 [2024-12-09 18:19:22.147886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14478c0 (9): Bad file descriptor 00:29:59.292 [2024-12-09 18:19:22.192944] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:29:59.292 passed 00:29:59.292 Test: blockdev write read 8 blocks ...passed 00:29:59.292 Test: blockdev write read size > 128k ...passed 00:29:59.292 Test: blockdev write read invalid size ...passed 00:29:59.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:59.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:59.292 Test: blockdev write read max offset ...passed 00:29:59.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:59.550 Test: blockdev writev readv 8 blocks ...passed 00:29:59.550 Test: blockdev writev readv 30 x 1block ...passed 00:29:59.550 Test: blockdev writev readv block ...passed 00:29:59.550 Test: blockdev writev readv size > 128k ...passed 00:29:59.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:59.550 Test: blockdev comparev and writev ...[2024-12-09 18:19:22.448753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.448791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.448815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 
[2024-12-09 18:19:22.448832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.449214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.449238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.449260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.449276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.449678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.449701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.449722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.449739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.450097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.450120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.450141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:59.550 [2024-12-09 18:19:22.450157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:59.550 passed 00:29:59.550 Test: blockdev nvme passthru rw ...passed 00:29:59.550 Test: blockdev nvme passthru vendor specific ...[2024-12-09 18:19:22.532809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:59.550 [2024-12-09 18:19:22.532837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.532983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:59.550 [2024-12-09 18:19:22.533014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.533153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:59.550 [2024-12-09 18:19:22.533175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:59.550 [2024-12-09 18:19:22.533318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:59.550 [2024-12-09 18:19:22.533340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:59.550 passed 00:29:59.550 Test: blockdev nvme admin passthru ...passed 00:29:59.809 Test: blockdev copy ...passed 00:29:59.809 00:29:59.809 Run Summary: Type Total Ran Passed Failed Inactive 00:29:59.809 suites 1 1 n/a 0 0 00:29:59.809 tests 23 23 23 0 0 00:29:59.809 asserts 152 152 152 0 n/a 00:29:59.809 00:29:59.809 Elapsed time = 1.178 
seconds 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.809 rmmod nvme_tcp 00:29:59.809 rmmod nvme_fabrics 00:29:59.809 rmmod nvme_keyring 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1631366 ']' 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1631366 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1631366 ']' 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1631366 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.809 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631366 00:30:00.067 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:00.068 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:00.068 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631366' 00:30:00.068 killing process with pid 1631366 00:30:00.068 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1631366 00:30:00.068 18:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1631366 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.068 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.326 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.326 18:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.230 18:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.230 00:30:02.230 real 0m6.549s 00:30:02.230 user 0m9.181s 00:30:02.230 sys 0m2.512s 00:30:02.230 18:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.230 18:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.230 ************************************ 00:30:02.230 END TEST nvmf_bdevio 00:30:02.230 ************************************ 00:30:02.230 18:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:02.230 00:30:02.230 real 3m55.683s 00:30:02.230 user 8m56.296s 00:30:02.230 sys 1m24.227s 00:30:02.230 18:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:30:02.230 18:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:02.230 ************************************ 00:30:02.230 END TEST nvmf_target_core_interrupt_mode 00:30:02.230 ************************************ 00:30:02.230 18:19:25 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:02.230 18:19:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:02.230 18:19:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.230 18:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.230 ************************************ 00:30:02.230 START TEST nvmf_interrupt 00:30:02.230 ************************************ 00:30:02.230 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:02.230 * Looking for test storage... 
00:30:02.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.489 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.490 --rc genhtml_branch_coverage=1 00:30:02.490 --rc genhtml_function_coverage=1 00:30:02.490 --rc genhtml_legend=1 00:30:02.490 --rc geninfo_all_blocks=1 00:30:02.490 --rc geninfo_unexecuted_blocks=1 00:30:02.490 00:30:02.490 ' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.490 --rc genhtml_branch_coverage=1 00:30:02.490 --rc 
genhtml_function_coverage=1 00:30:02.490 --rc genhtml_legend=1 00:30:02.490 --rc geninfo_all_blocks=1 00:30:02.490 --rc geninfo_unexecuted_blocks=1 00:30:02.490 00:30:02.490 ' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.490 --rc genhtml_branch_coverage=1 00:30:02.490 --rc genhtml_function_coverage=1 00:30:02.490 --rc genhtml_legend=1 00:30:02.490 --rc geninfo_all_blocks=1 00:30:02.490 --rc geninfo_unexecuted_blocks=1 00:30:02.490 00:30:02.490 ' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.490 --rc genhtml_branch_coverage=1 00:30:02.490 --rc genhtml_function_coverage=1 00:30:02.490 --rc genhtml_legend=1 00:30:02.490 --rc geninfo_all_blocks=1 00:30:02.490 --rc geninfo_unexecuted_blocks=1 00:30:02.490 00:30:02.490 ' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.490 
18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.490 
18:19:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.490 18:19:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:02.490 
18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.490 18:19:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.021 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.022 18:19:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.022 18:19:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.022 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.022 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.022 18:19:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:30:05.022 00:30:05.022 --- 10.0.0.2 ping statistics --- 00:30:05.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.022 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:05.022 00:30:05.022 --- 10.0.0.1 ping statistics --- 00:30:05.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.022 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.022 18:19:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1633591 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1633591 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1633591 ']' 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.022 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.023 [2024-12-09 18:19:27.690974] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:05.023 [2024-12-09 18:19:27.692024] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:30:05.023 [2024-12-09 18:19:27.692094] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.023 [2024-12-09 18:19:27.765958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.023 [2024-12-09 18:19:27.823084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.023 [2024-12-09 18:19:27.823162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.023 [2024-12-09 18:19:27.823176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.023 [2024-12-09 18:19:27.823188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.023 [2024-12-09 18:19:27.823197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.023 [2024-12-09 18:19:27.828567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.023 [2024-12-09 18:19:27.828579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.023 [2024-12-09 18:19:27.926069] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:05.023 [2024-12-09 18:19:27.926081] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:05.023 [2024-12-09 18:19:27.926332] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:05.023 5000+0 records in 00:30:05.023 5000+0 records out 00:30:05.023 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0149104 s, 687 MB/s 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.023 18:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.023 AIO0 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.023 18:19:28 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.023 [2024-12-09 18:19:28.049218] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.023 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.282 [2024-12-09 18:19:28.073456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1633591 0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1633591 0 idle 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633591 root 20 0 128.2g 48384 35328 S 0.0 0.1 0:00.28 reactor_0' 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633591 root 20 0 128.2g 48384 35328 S 0.0 0.1 0:00.28 reactor_0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:05.282 
18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1633591 1 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1633591 1 idle 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:05.282 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633597 root 20 0 128.2g 48384 35328 S 0.0 0.1 0:00.00 reactor_1' 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633597 root 20 0 128.2g 
48384 35328 S 0.0 0.1 0:00.00 reactor_1 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1633754 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1633591 0 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1633591 0 busy 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:05.541 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633591 root 20 0 128.2g 49152 35328 R 26.7 0.1 0:00.32 reactor_0' 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633591 root 20 0 128.2g 49152 35328 R 26.7 0.1 0:00.32 reactor_0 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=26.7 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=26 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:05.800 18:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:06.735 18:19:29 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633591 root 20 0 128.2g 49152 35328 R 99.9 0.1 0:02.64 reactor_0' 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633591 root 20 0 128.2g 49152 35328 R 99.9 0.1 0:02.64 reactor_0 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1633591 1 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1633591 1 busy 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:06.735 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633597 root 20 0 128.2g 49152 35328 R 99.9 0.1 0:01.35 reactor_1' 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633597 root 20 0 128.2g 49152 35328 R 99.9 0.1 0:01.35 reactor_1 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:06.993 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:06.994 18:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:06.994 18:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1633754 00:30:17.061 Initializing NVMe Controllers 00:30:17.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:30:17.061 Controller IO queue size 256, less than required. 00:30:17.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:17.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:17.061 Initialization complete. Launching workers. 00:30:17.061 ======================================================== 00:30:17.061 Latency(us) 00:30:17.061 Device Information : IOPS MiB/s Average min max 00:30:17.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13839.19 54.06 18510.48 4125.92 26151.84 00:30:17.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13747.49 53.70 18633.68 4549.32 58757.53 00:30:17.061 ======================================================== 00:30:17.061 Total : 27586.68 107.76 18571.87 4125.92 58757.53 00:30:17.061 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1633591 0 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1633591 0 idle 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633591 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:20.22 reactor_0' 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633591 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:20.22 reactor_0 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:17.061 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1633591 1 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1633591 1 idle 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633597 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:09.98 reactor_1' 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633597 root 20 0 128.2g 49152 35328 S 0.0 0.1 0:09.98 reactor_1 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:17.062 18:19:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:17.062 18:19:39 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:17.062 18:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1633591 0 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1633591 0 idle 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633591 root 20 0 128.2g 61440 35328 S 6.7 0.1 0:20.32 reactor_0' 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633591 root 20 0 128.2g 61440 35328 S 6.7 0.1 0:20.32 reactor_0 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1633591 1 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1633591 1 idle 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1633591 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1633591 -w 256 00:30:18.437 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1633597 root 20 0 128.2g 61440 35328 S 0.0 0.1 0:10.00 reactor_1' 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1633597 root 20 0 128.2g 61440 35328 S 0.0 0.1 0:10.00 reactor_1 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:18.695 
18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:18.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.695 18:19:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.695 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.695 rmmod nvme_tcp 00:30:18.695 rmmod nvme_fabrics 00:30:18.954 rmmod nvme_keyring 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1633591 ']' 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1633591 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1633591 ']' 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1633591 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1633591 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1633591' 00:30:18.954 killing process with pid 1633591 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1633591 00:30:18.954 18:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1633591 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.213 18:19:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.213 18:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.117 18:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.117 00:30:21.117 real 0m18.846s 00:30:21.117 user 0m37.134s 00:30:21.117 sys 0m6.513s 00:30:21.117 18:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.117 18:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:21.117 ************************************ 00:30:21.117 END TEST nvmf_interrupt 00:30:21.117 ************************************ 00:30:21.117 00:30:21.117 real 25m2.109s 00:30:21.117 user 58m26.498s 00:30:21.117 sys 6m37.643s 00:30:21.117 18:19:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.117 18:19:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.117 ************************************ 00:30:21.117 END TEST nvmf_tcp 00:30:21.117 ************************************ 00:30:21.117 18:19:44 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:21.117 18:19:44 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:21.117 18:19:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:21.117 18:19:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.117 18:19:44 -- common/autotest_common.sh@10 -- # set +x 00:30:21.117 ************************************ 00:30:21.117 START TEST spdkcli_nvmf_tcp 00:30:21.117 ************************************ 00:30:21.117 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:21.375 * Looking for test storage... 00:30:21.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.375 
18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:21.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.375 --rc genhtml_branch_coverage=1 00:30:21.375 --rc genhtml_function_coverage=1 00:30:21.375 
--rc genhtml_legend=1 00:30:21.375 --rc geninfo_all_blocks=1 00:30:21.375 --rc geninfo_unexecuted_blocks=1 00:30:21.375 00:30:21.375 ' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:21.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.375 --rc genhtml_branch_coverage=1 00:30:21.375 --rc genhtml_function_coverage=1 00:30:21.375 --rc genhtml_legend=1 00:30:21.375 --rc geninfo_all_blocks=1 00:30:21.375 --rc geninfo_unexecuted_blocks=1 00:30:21.375 00:30:21.375 ' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:21.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.375 --rc genhtml_branch_coverage=1 00:30:21.375 --rc genhtml_function_coverage=1 00:30:21.375 --rc genhtml_legend=1 00:30:21.375 --rc geninfo_all_blocks=1 00:30:21.375 --rc geninfo_unexecuted_blocks=1 00:30:21.375 00:30:21.375 ' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:21.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.375 --rc genhtml_branch_coverage=1 00:30:21.375 --rc genhtml_function_coverage=1 00:30:21.375 --rc genhtml_legend=1 00:30:21.375 --rc geninfo_all_blocks=1 00:30:21.375 --rc geninfo_unexecuted_blocks=1 00:30:21.375 00:30:21.375 ' 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.375 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:21.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1635769 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1635769 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1635769 ']' 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.376 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.376 [2024-12-09 18:19:44.380790] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:30:21.376 [2024-12-09 18:19:44.380884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635769 ] 00:30:21.634 [2024-12-09 18:19:44.447008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:21.634 [2024-12-09 18:19:44.502094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.634 [2024-12-09 18:19:44.502099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.634 18:19:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:21.634 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:21.634 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:21.634 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:21.634 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:30:21.634 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:21.634 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:21.634 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:21.634 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:21.634 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:21.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:21.634 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:21.634 ' 00:30:24.912 [2024-12-09 18:19:47.275034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.844 [2024-12-09 18:19:48.543400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:28.369 [2024-12-09 18:19:50.890738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:30.263 [2024-12-09 18:19:52.904790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:31.634 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:31.634 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:31.634 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:31.634 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:31.634 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:31.634 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:31.634 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:31.634 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:31.634 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:31.634 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:31.634 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:31.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:31.634 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:31.634 18:19:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.201 18:19:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:32.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:32.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:32.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:32.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:32.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:32.201 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:32.201 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:32.201 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:30:32.201 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:32.201 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:32.201 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:32.201 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:32.201 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:32.201 ' 00:30:37.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:37.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:37.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:37.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:37.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:37.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:37.462 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:37.462 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:37.462 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:37.462 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:37.462 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:37.463 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:37.463 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:37.463 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1635769 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1635769 ']' 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1635769 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1635769 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.720 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.721 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1635769' 00:30:37.721 killing process with pid 1635769 00:30:37.721 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1635769 00:30:37.721 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1635769 00:30:37.979 18:20:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:37.979 18:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1635769 ']' 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1635769 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1635769 ']' 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1635769 00:30:37.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1635769) - No such process 00:30:37.980 18:20:00 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1635769 is not found' 00:30:37.980 Process with pid 1635769 is not found 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:37.980 00:30:37.980 real 0m16.632s 00:30:37.980 user 0m35.386s 00:30:37.980 sys 0m0.776s 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.980 18:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:37.980 ************************************ 00:30:37.980 END TEST spdkcli_nvmf_tcp 00:30:37.980 ************************************ 00:30:37.980 18:20:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:37.980 18:20:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:37.980 18:20:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.980 18:20:00 -- common/autotest_common.sh@10 -- # set +x 00:30:37.980 ************************************ 00:30:37.980 START TEST nvmf_identify_passthru 00:30:37.980 ************************************ 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:37.980 * Looking for test storage... 
00:30:37.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:37.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.980 --rc genhtml_branch_coverage=1 00:30:37.980 --rc genhtml_function_coverage=1 00:30:37.980 --rc genhtml_legend=1 00:30:37.980 --rc geninfo_all_blocks=1 00:30:37.980 --rc geninfo_unexecuted_blocks=1 00:30:37.980 00:30:37.980 ' 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:37.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.980 --rc genhtml_branch_coverage=1 00:30:37.980 --rc genhtml_function_coverage=1 
00:30:37.980 --rc genhtml_legend=1 00:30:37.980 --rc geninfo_all_blocks=1 00:30:37.980 --rc geninfo_unexecuted_blocks=1 00:30:37.980 00:30:37.980 ' 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:37.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.980 --rc genhtml_branch_coverage=1 00:30:37.980 --rc genhtml_function_coverage=1 00:30:37.980 --rc genhtml_legend=1 00:30:37.980 --rc geninfo_all_blocks=1 00:30:37.980 --rc geninfo_unexecuted_blocks=1 00:30:37.980 00:30:37.980 ' 00:30:37.980 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:37.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.980 --rc genhtml_branch_coverage=1 00:30:37.980 --rc genhtml_function_coverage=1 00:30:37.980 --rc genhtml_legend=1 00:30:37.980 --rc geninfo_all_blocks=1 00:30:37.980 --rc geninfo_unexecuted_blocks=1 00:30:37.980 00:30:37.980 ' 00:30:37.980 18:20:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.980 18:20:00 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.980 18:20:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.980 18:20:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.980 18:20:00 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.980 18:20:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.980 18:20:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:37.980 18:20:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.980 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:37.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.981 18:20:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.981 18:20:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.981 18:20:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.981 18:20:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.981 18:20:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.981 18:20:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.981 18:20:00 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.981 18:20:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.981 18:20:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:37.981 18:20:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.981 18:20:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.981 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:37.981 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.981 18:20:00 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.981 18:20:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:40.514 18:20:03 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:40.514 
18:20:03 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:40.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:40.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.514 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:40.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:40.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:40.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:30:40.515 00:30:40.515 --- 10.0.0.2 ping statistics --- 00:30:40.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.515 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:30:40.515 00:30:40.515 --- 10.0.0.1 ping statistics --- 00:30:40.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.515 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:40.515 18:20:03 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:40.515 18:20:03 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:40.515 18:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:40.515 18:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:44.700 18:20:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:44.700 18:20:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:44.700 18:20:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:44.700 18:20:07 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1640401 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.887 18:20:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1640401 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1640401 ']' 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
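The `waitforlisten` step above blocks until the target's RPC Unix socket (`/var/tmp/spdk.sock`) appears. A minimal sketch of that polling pattern — the helper name and retry policy here are assumptions for illustration, not SPDK's actual implementation:

```shell
# Illustrative only: poll for an RPC Unix socket, give up after a retry limit.
wait_for_rpc_sock() {
  local sock=$1 retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1
}

# A path that never appears times out with a nonzero status.
wait_for_rpc_sock /tmp/nonexistent.sock 2 || echo "timed out"
```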
00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.887 18:20:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.887 [2024-12-09 18:20:11.809480] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:30:48.887 [2024-12-09 18:20:11.809592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.887 [2024-12-09 18:20:11.884142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.145 [2024-12-09 18:20:11.946877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.145 [2024-12-09 18:20:11.946932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.145 [2024-12-09 18:20:11.946960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.145 [2024-12-09 18:20:11.946971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.145 [2024-12-09 18:20:11.946981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
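The EAL banner above reports "Total cores available: 4" for the `-m 0xF` core mask passed to `nvmf_tgt`, and one reactor is started per set bit. Counting the set bits of the mask reproduces that number; this sketch is illustrative and not part of SPDK:

```shell
# Count the reactor cores implied by an SPDK core mask such as -m 0xF.
mask=0xF
bits=$(( mask ))
cores=0
while [ "$bits" -ne 0 ]; do
  cores=$(( cores + (bits & 1) ))   # add the lowest bit
  bits=$(( bits >> 1 ))            # shift to the next bit
done
echo "cores: $cores"
```

Prints `cores: 4`, matching the four "Reactor started on core N" notices in the log.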
00:30:49.145 [2024-12-09 18:20:11.948629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.145 [2024-12-09 18:20:11.948664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.145 [2024-12-09 18:20:11.948723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.145 [2024-12-09 18:20:11.948726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:30:49.145 18:20:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.145 INFO: Log level set to 20 00:30:49.145 INFO: Requests: 00:30:49.145 { 00:30:49.145 "jsonrpc": "2.0", 00:30:49.145 "method": "nvmf_set_config", 00:30:49.145 "id": 1, 00:30:49.145 "params": { 00:30:49.145 "admin_cmd_passthru": { 00:30:49.145 "identify_ctrlr": true 00:30:49.145 } 00:30:49.145 } 00:30:49.145 } 00:30:49.145 00:30:49.145 INFO: response: 00:30:49.145 { 00:30:49.145 "jsonrpc": "2.0", 00:30:49.145 "id": 1, 00:30:49.145 "result": true 00:30:49.145 } 00:30:49.145 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.145 18:20:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.145 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.145 INFO: Setting log level to 20 00:30:49.145 INFO: Setting log level to 20 00:30:49.145 INFO: Log level set to 20 00:30:49.145 INFO: Log level set to 20 00:30:49.145 
INFO: Requests: 00:30:49.145 { 00:30:49.145 "jsonrpc": "2.0", 00:30:49.145 "method": "framework_start_init", 00:30:49.145 "id": 1 00:30:49.145 } 00:30:49.145 00:30:49.145 INFO: Requests: 00:30:49.145 { 00:30:49.145 "jsonrpc": "2.0", 00:30:49.145 "method": "framework_start_init", 00:30:49.145 "id": 1 00:30:49.145 } 00:30:49.145 00:30:49.403 [2024-12-09 18:20:12.191803] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:49.403 INFO: response: 00:30:49.403 { 00:30:49.403 "jsonrpc": "2.0", 00:30:49.403 "id": 1, 00:30:49.403 "result": true 00:30:49.403 } 00:30:49.403 00:30:49.403 INFO: response: 00:30:49.403 { 00:30:49.403 "jsonrpc": "2.0", 00:30:49.403 "id": 1, 00:30:49.403 "result": true 00:30:49.403 } 00:30:49.403 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.403 18:20:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.403 INFO: Setting log level to 40 00:30:49.403 INFO: Setting log level to 40 00:30:49.403 INFO: Setting log level to 40 00:30:49.403 [2024-12-09 18:20:12.201989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.403 18:20:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.403 18:20:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:49.403 18:20:12 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.403 18:20:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:52.681 Nvme0n1 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:52.681 [2024-12-09 18:20:15.122477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.681 18:20:15 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:52.681 [ 00:30:52.681 { 00:30:52.681 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:52.681 "subtype": "Discovery", 00:30:52.681 "listen_addresses": [], 00:30:52.681 "allow_any_host": true, 00:30:52.681 "hosts": [] 00:30:52.681 }, 00:30:52.681 { 00:30:52.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.681 "subtype": "NVMe", 00:30:52.681 "listen_addresses": [ 00:30:52.681 { 00:30:52.681 "trtype": "TCP", 00:30:52.681 "adrfam": "IPv4", 00:30:52.681 "traddr": "10.0.0.2", 00:30:52.681 "trsvcid": "4420" 00:30:52.681 } 00:30:52.681 ], 00:30:52.681 "allow_any_host": true, 00:30:52.681 "hosts": [], 00:30:52.681 "serial_number": "SPDK00000000000001", 00:30:52.681 "model_number": "SPDK bdev Controller", 00:30:52.681 "max_namespaces": 1, 00:30:52.681 "min_cntlid": 1, 00:30:52.681 "max_cntlid": 65519, 00:30:52.681 "namespaces": [ 00:30:52.681 { 00:30:52.681 "nsid": 1, 00:30:52.681 "bdev_name": "Nvme0n1", 00:30:52.681 "name": "Nvme0n1", 00:30:52.681 "nguid": "6B6361D8B53B49D4A8D1D8345A5498ED", 00:30:52.681 "uuid": "6b6361d8-b53b-49d4-a8d1-d8345a5498ed" 00:30:52.681 } 00:30:52.681 ] 00:30:52.681 } 00:30:52.681 ] 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:52.681 18:20:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.681 rmmod nvme_tcp 00:30:52.681 rmmod nvme_fabrics 00:30:52.681 rmmod nvme_keyring 00:30:52.681 18:20:15 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1640401 ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1640401 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1640401 ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1640401 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1640401 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1640401' 00:30:52.681 killing process with pid 1640401 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1640401 00:30:52.681 18:20:15 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1640401 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:30:54.620 18:20:17 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.620 18:20:17 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.620 18:20:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:54.620 18:20:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.541 18:20:19 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.541 00:30:56.541 real 0m18.371s 00:30:56.541 user 0m26.645s 00:30:56.541 sys 0m3.249s 00:30:56.541 18:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.541 18:20:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:56.541 ************************************ 00:30:56.541 END TEST nvmf_identify_passthru 00:30:56.541 ************************************ 00:30:56.541 18:20:19 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:56.541 18:20:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:56.541 18:20:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.541 18:20:19 -- common/autotest_common.sh@10 -- # set +x 00:30:56.541 ************************************ 00:30:56.541 START TEST nvmf_dif 00:30:56.541 ************************************ 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:56.541 * Looking for test storage... 
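The `iptr` cleanup a few lines above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) works because every rule the test added carries an `SPDK_NVMF` comment: filtering those lines out of the saved ruleset and restoring the remainder removes them in one shot. A minimal sketch on sample text, since live `iptables` needs root and is only simulated here:

```shell
# Simulated stand-in for `iptables-save` output; only the rule tagged
# SPDK_NVMF should be dropped before it would be fed to `iptables-restore`.
sample_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'
filtered=$(printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"
```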
00:30:56.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.541 18:20:19 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.541 18:20:19 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.542 --rc genhtml_branch_coverage=1 00:30:56.542 --rc genhtml_function_coverage=1 00:30:56.542 --rc genhtml_legend=1 00:30:56.542 --rc geninfo_all_blocks=1 00:30:56.542 --rc geninfo_unexecuted_blocks=1 00:30:56.542 00:30:56.542 ' 00:30:56.542 18:20:19 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:56.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.542 --rc genhtml_branch_coverage=1 00:30:56.542 --rc genhtml_function_coverage=1 00:30:56.542 --rc genhtml_legend=1 00:30:56.542 --rc geninfo_all_blocks=1 00:30:56.542 --rc geninfo_unexecuted_blocks=1 00:30:56.542 00:30:56.542 ' 00:30:56.542 18:20:19 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:30:56.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.542 --rc genhtml_branch_coverage=1 00:30:56.542 --rc genhtml_function_coverage=1 00:30:56.542 --rc genhtml_legend=1 00:30:56.542 --rc geninfo_all_blocks=1 00:30:56.542 --rc geninfo_unexecuted_blocks=1 00:30:56.542 00:30:56.542 ' 00:30:56.542 18:20:19 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:56.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.542 --rc genhtml_branch_coverage=1 00:30:56.542 --rc genhtml_function_coverage=1 00:30:56.542 --rc genhtml_legend=1 00:30:56.542 --rc geninfo_all_blocks=1 00:30:56.542 --rc geninfo_unexecuted_blocks=1 00:30:56.542 00:30:56.542 ' 00:30:56.542 18:20:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.542 18:20:19 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.542 18:20:19 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.542 18:20:19 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.542 18:20:19 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.542 18:20:19 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.542 18:20:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.542 18:20:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.542 18:20:19 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.542 18:20:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:56.542 18:20:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.542 18:20:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:56.542 18:20:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
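The `[: : integer expression expected` error recorded above comes from `build_nvmf_app_args` evaluating `'[' '' -eq 1 ']'` — a numeric test against an unset/empty variable. A defensive pattern, shown as an illustration rather than as the fix SPDK itself applies, is to default the variable before the comparison:

```shell
# A flag variable that may legitimately be empty (the name is illustrative).
maybe_flag=""

# ${var:-0} substitutes 0 for an empty value, so [ -eq ] never sees
# an empty operand and the "integer expression expected" error is avoided.
if [ "${maybe_flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```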
00:30:56.542 18:20:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:56.542 18:20:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:56.542 18:20:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.542 18:20:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:56.542 18:20:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.542 18:20:19 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.542 18:20:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:30:59.075 18:20:21 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:59.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:59.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.075 18:20:21 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:59.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:59.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.075 
18:20:21 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.075 18:20:21 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:59.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:30:59.076 00:30:59.076 --- 10.0.0.2 ping statistics --- 00:30:59.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.076 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:30:59.076 00:30:59.076 --- 10.0.0.1 ping statistics --- 00:30:59.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.076 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:59.076 18:20:21 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:00.011 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:00.011 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:00.011 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:00.011 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:00.011 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:00.011 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:00.011 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:00.011 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:00.011 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:00.011 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:00.011 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:00.011 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:00.011 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:31:00.011 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:00.011 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:00.011 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:00.011 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:00.269 18:20:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:00.269 18:20:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1643676 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:00.269 18:20:23 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1643676 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1643676 ']' 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:00.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.269 18:20:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.269 [2024-12-09 18:20:23.145268] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:31:00.269 [2024-12-09 18:20:23.145347] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.269 [2024-12-09 18:20:23.216928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.269 [2024-12-09 18:20:23.272687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.269 [2024-12-09 18:20:23.272742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.269 [2024-12-09 18:20:23.272772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.269 [2024-12-09 18:20:23.272783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.269 [2024-12-09 18:20:23.272792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
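The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` process creates its RPC UNIX socket (`/var/tmp/spdk.sock`). A minimal, hypothetical sketch of that polling pattern (the helper name `wait_for_path` and the retry/interval values are illustrative, not SPDK's actual implementation):

```shell
# Hypothetical sketch of a waitforlisten-style helper: poll until a path
# (e.g. the RPC socket /var/tmp/spdk.sock) exists, up to max_retries tries.
wait_for_path() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # In the real script this would also probe the socket with an RPC;
        # existence of the path is enough for this sketch.
        [ -e "$path" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

In the log, the equivalent wait runs right after `nvmf_tgt` is started inside the `cvl_0_0_ns_spdk` namespace, with `max_retries=100` visible in the trace.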
00:31:00.269 [2024-12-09 18:20:23.273383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.527 18:20:23 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.527 18:20:23 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:00.527 18:20:23 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:00.527 18:20:23 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.527 18:20:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 18:20:23 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.528 18:20:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:00.528 18:20:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:00.528 18:20:23 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.528 18:20:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 [2024-12-09 18:20:23.416514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.528 18:20:23 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.528 18:20:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:00.528 18:20:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:00.528 18:20:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.528 18:20:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 ************************************ 00:31:00.528 START TEST fio_dif_1_default 00:31:00.528 ************************************ 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 bdev_null0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.528 [2024-12-09 18:20:23.480904] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # 
cat 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:00.528 { 00:31:00.528 "params": { 00:31:00.528 "name": "Nvme$subsystem", 00:31:00.528 "trtype": "$TEST_TRANSPORT", 00:31:00.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.528 "adrfam": "ipv4", 00:31:00.528 "trsvcid": "$NVMF_PORT", 00:31:00.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.528 "hdgst": ${hdgst:-false}, 00:31:00.528 "ddgst": ${ddgst:-false} 00:31:00.528 }, 00:31:00.528 "method": "bdev_nvme_attach_controller" 00:31:00.528 } 00:31:00.528 EOF 00:31:00.528 )") 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:00.528 "params": { 00:31:00.528 "name": "Nvme0", 00:31:00.528 "trtype": "tcp", 00:31:00.528 "traddr": "10.0.0.2", 00:31:00.528 "adrfam": "ipv4", 00:31:00.528 "trsvcid": "4420", 00:31:00.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.528 "hdgst": false, 00:31:00.528 "ddgst": false 00:31:00.528 }, 00:31:00.528 "method": "bdev_nvme_attach_controller" 00:31:00.528 }' 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:00.528 18:20:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.840 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:00.840 fio-3.35 
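The JSON printed above is what `gen_nvmf_target_json` renders for fio's `--spdk_json_conf`: one `bdev_nvme_attach_controller` object per subsystem id, joined with commas and pretty-printed through `jq`. A simplified sketch of that rendering, using the fixed defaults visible in this log (`10.0.0.2:4420`, no digests); the function name and flat `printf` formatting are this sketch's own, not SPDK's exact helper:

```shell
# Sketch modeled on the gen_nvmf_target_json output printed in the log:
# emit one attach-controller JSON object per subsystem id, comma-joined.
gen_target_json() {
    out=""
    for sub in "$@"; do
        # Address, port, and digest settings are the fixed test defaults
        # from this log, hard-coded here for illustration.
        obj=$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$sub" "$sub" "$sub")
        if [ -n "$out" ]; then out="$out,$obj"; else out=$obj; fi
    done
    printf '%s\n' "$out"
}
```

The real script additionally pipes the joined objects through `jq .` (the `IFS=,` and `jq` steps visible in the trace) before handing the result to fio on `/dev/fd/62`.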
00:31:00.840 Starting 1 thread 00:31:13.033 00:31:13.033 filename0: (groupid=0, jobs=1): err= 0: pid=1643904: Mon Dec 9 18:20:34 2024 00:31:13.033 read: IOPS=212, BW=850KiB/s (870kB/s)(8528KiB/10034msec) 00:31:13.033 slat (nsec): min=6752, max=61626, avg=8976.03, stdev=3704.39 00:31:13.033 clat (usec): min=534, max=43843, avg=18797.43, stdev=20215.65 00:31:13.033 lat (usec): min=545, max=43877, avg=18806.41, stdev=20215.54 00:31:13.033 clat percentiles (usec): 00:31:13.033 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 619], 00:31:13.033 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[41157], 00:31:13.033 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:13.033 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:31:13.033 | 99.99th=[43779] 00:31:13.033 bw ( KiB/s): min= 704, max= 1280, per=100.00%, avg=851.20, stdev=142.50, samples=20 00:31:13.033 iops : min= 176, max= 320, avg=212.80, stdev=35.63, samples=20 00:31:13.033 lat (usec) : 750=55.11%, 1000=0.23% 00:31:13.033 lat (msec) : 50=44.65% 00:31:13.033 cpu : usr=91.35%, sys=8.35%, ctx=19, majf=0, minf=259 00:31:13.033 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.033 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.033 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:13.033 00:31:13.033 Run status group 0 (all jobs): 00:31:13.033 READ: bw=850KiB/s (870kB/s), 850KiB/s-850KiB/s (870kB/s-870kB/s), io=8528KiB (8733kB), run=10034-10034msec 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 00:31:13.033 real 0m11.346s 00:31:13.033 user 0m10.404s 00:31:13.033 sys 0m1.102s 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 ************************************ 00:31:13.033 END TEST fio_dif_1_default 00:31:13.033 ************************************ 00:31:13.033 18:20:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:13.033 18:20:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:13.033 18:20:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 ************************************ 00:31:13.033 START TEST fio_dif_1_multi_subsystems 00:31:13.033 ************************************ 00:31:13.033 18:20:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 bdev_null0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 [2024-12-09 18:20:34.879960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 bdev_null1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:13.034 { 00:31:13.034 "params": { 00:31:13.034 "name": "Nvme$subsystem", 00:31:13.034 "trtype": "$TEST_TRANSPORT", 00:31:13.034 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:13.034 "adrfam": "ipv4", 00:31:13.034 "trsvcid": "$NVMF_PORT", 00:31:13.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.034 "hdgst": ${hdgst:-false}, 00:31:13.034 "ddgst": ${ddgst:-false} 00:31:13.034 }, 00:31:13.034 "method": "bdev_nvme_attach_controller" 00:31:13.034 } 00:31:13.034 EOF 00:31:13.034 )") 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.034 
18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:13.034 { 00:31:13.034 "params": { 00:31:13.034 "name": "Nvme$subsystem", 00:31:13.034 "trtype": "$TEST_TRANSPORT", 00:31:13.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.034 "adrfam": "ipv4", 00:31:13.034 "trsvcid": "$NVMF_PORT", 00:31:13.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.034 "hdgst": ${hdgst:-false}, 00:31:13.034 "ddgst": ${ddgst:-false} 00:31:13.034 }, 00:31:13.034 "method": "bdev_nvme_attach_controller" 00:31:13.034 } 00:31:13.034 EOF 00:31:13.034 )") 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:13.034 "params": { 00:31:13.034 "name": "Nvme0", 00:31:13.034 "trtype": "tcp", 00:31:13.034 "traddr": "10.0.0.2", 00:31:13.034 "adrfam": "ipv4", 00:31:13.034 "trsvcid": "4420", 00:31:13.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.034 "hdgst": false, 00:31:13.034 "ddgst": false 00:31:13.034 }, 00:31:13.034 "method": "bdev_nvme_attach_controller" 00:31:13.034 },{ 00:31:13.034 "params": { 00:31:13.034 "name": "Nvme1", 00:31:13.034 "trtype": "tcp", 00:31:13.034 "traddr": "10.0.0.2", 00:31:13.034 "adrfam": "ipv4", 00:31:13.034 "trsvcid": "4420", 00:31:13.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:13.034 "hdgst": false, 00:31:13.034 "ddgst": false 00:31:13.034 }, 00:31:13.034 "method": "bdev_nvme_attach_controller" 00:31:13.034 }' 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:13.034 18:20:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.034 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:13.034 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:13.034 fio-3.35 00:31:13.034 Starting 2 threads 00:31:23.005 00:31:23.005 filename0: (groupid=0, jobs=1): err= 0: pid=1645306: Mon Dec 9 18:20:46 2024 00:31:23.005 read: IOPS=196, BW=787KiB/s (806kB/s)(7904KiB/10040msec) 00:31:23.005 slat (usec): min=7, max=112, avg= 9.51, stdev= 4.64 00:31:23.005 clat (usec): min=579, max=45672, avg=20293.60, stdev=20292.33 00:31:23.005 lat (usec): min=587, max=45688, avg=20303.11, stdev=20292.46 00:31:23.005 clat percentiles (usec): 00:31:23.005 | 1.00th=[ 594], 5.00th=[ 627], 10.00th=[ 668], 20.00th=[ 717], 00:31:23.005 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 938], 60.00th=[41157], 00:31:23.005 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:23.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:31:23.005 | 99.99th=[45876] 00:31:23.005 bw ( KiB/s): min= 704, max= 896, per=50.10%, avg=788.80, stdev=53.22, samples=20 00:31:23.005 iops : min= 176, max= 224, avg=197.20, stdev=13.30, samples=20 00:31:23.005 lat (usec) : 750=28.54%, 1000=22.77% 00:31:23.005 lat (msec) : 2=0.51%, 50=48.18% 00:31:23.005 cpu : usr=94.88%, sys=4.80%, ctx=16, majf=0, minf=49 00:31:23.005 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:23.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.005 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.005 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:23.005 filename1: (groupid=0, jobs=1): err= 0: pid=1645307: Mon Dec 9 18:20:46 2024 00:31:23.005 read: IOPS=196, BW=787KiB/s (806kB/s)(7888KiB/10023msec) 00:31:23.005 slat (nsec): min=7011, max=33007, avg=9285.66, stdev=3797.51 00:31:23.005 clat (usec): min=561, max=45668, avg=20301.14, stdev=20332.85 00:31:23.005 lat (usec): min=568, max=45684, avg=20310.43, stdev=20332.72 00:31:23.005 clat percentiles (usec): 00:31:23.005 | 1.00th=[ 586], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 635], 00:31:23.005 | 30.00th=[ 660], 40.00th=[ 693], 50.00th=[ 816], 60.00th=[41157], 00:31:23.005 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:23.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:31:23.005 | 99.99th=[45876] 00:31:23.005 bw ( KiB/s): min= 704, max= 960, per=50.03%, avg=787.20, stdev=55.33, samples=20 00:31:23.005 iops : min= 176, max= 240, avg=196.80, stdev=13.83, samples=20 00:31:23.005 lat (usec) : 750=48.02%, 1000=3.70% 00:31:23.005 lat (msec) : 50=48.28% 00:31:23.005 cpu : usr=94.28%, sys=5.40%, ctx=24, majf=0, minf=188 00:31:23.005 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.005 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.005 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:23.005 00:31:23.005 Run status group 0 (all jobs): 00:31:23.005 READ: bw=1573KiB/s (1611kB/s), 787KiB/s-787KiB/s (806kB/s-806kB/s), io=15.4MiB (16.2MB), run=10023-10040msec 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:23.572 18:20:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.572 00:31:23.572 real 0m11.544s 00:31:23.572 user 0m20.463s 00:31:23.572 sys 0m1.362s 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.572 18:20:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:23.572 ************************************ 00:31:23.572 END TEST fio_dif_1_multi_subsystems 00:31:23.572 ************************************ 00:31:23.572 18:20:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:23.572 18:20:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:23.572 18:20:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.572 18:20:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:23.572 ************************************ 00:31:23.572 START TEST fio_dif_rand_params 00:31:23.572 ************************************ 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:23.572 18:20:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:23.572 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.573 bdev_null0 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.573 [2024-12-09 18:20:46.473239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.573 { 
00:31:23.573 "params": { 00:31:23.573 "name": "Nvme$subsystem", 00:31:23.573 "trtype": "$TEST_TRANSPORT", 00:31:23.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.573 "adrfam": "ipv4", 00:31:23.573 "trsvcid": "$NVMF_PORT", 00:31:23.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.573 "hdgst": ${hdgst:-false}, 00:31:23.573 "ddgst": ${ddgst:-false} 00:31:23.573 }, 00:31:23.573 "method": "bdev_nvme_attach_controller" 00:31:23.573 } 00:31:23.573 EOF 00:31:23.573 )") 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.573 
18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.573 "params": { 00:31:23.573 "name": "Nvme0", 00:31:23.573 "trtype": "tcp", 00:31:23.573 "traddr": "10.0.0.2", 00:31:23.573 "adrfam": "ipv4", 00:31:23.573 "trsvcid": "4420", 00:31:23.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.573 "hdgst": false, 00:31:23.573 "ddgst": false 00:31:23.573 }, 00:31:23.573 "method": "bdev_nvme_attach_controller" 00:31:23.573 }' 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:23.573 18:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.832 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:23.832 ... 00:31:23.832 fio-3.35 00:31:23.832 Starting 3 threads 00:31:30.392 00:31:30.392 filename0: (groupid=0, jobs=1): err= 0: pid=1646708: Mon Dec 9 18:20:52 2024 00:31:30.392 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(135MiB/5045msec) 00:31:30.392 slat (usec): min=4, max=157, avg=18.78, stdev= 5.94 00:31:30.392 clat (usec): min=7234, max=47773, avg=13965.63, stdev=2701.45 00:31:30.392 lat (usec): min=7254, max=47793, avg=13984.41, stdev=2701.32 00:31:30.392 clat percentiles (usec): 00:31:30.392 | 1.00th=[ 8291], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[11994], 00:31:30.392 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14746], 00:31:30.392 | 70.00th=[15401], 80.00th=[16057], 90.00th=[16909], 95.00th=[17433], 00:31:30.392 | 99.00th=[18220], 99.50th=[18482], 99.90th=[45876], 99.95th=[47973], 00:31:30.392 | 99.99th=[47973] 00:31:30.392 bw ( KiB/s): min=25600, max=30208, per=31.69%, avg=27571.20, stdev=1236.89, samples=10 00:31:30.392 iops : min= 200, max= 236, avg=215.40, stdev= 9.66, samples=10 00:31:30.392 lat (msec) : 10=6.02%, 20=93.79%, 50=0.19% 00:31:30.392 cpu : usr=94.90%, sys=4.56%, ctx=9, majf=0, minf=132 00:31:30.392 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.392 issued rwts: total=1079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.392 filename0: (groupid=0, jobs=1): err= 0: pid=1646709: Mon Dec 9 18:20:52 2024 00:31:30.392 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(147MiB/5007msec) 00:31:30.392 slat (nsec): min=4551, max=56532, avg=16281.98, stdev=5123.59 00:31:30.392 
clat (usec): min=6986, max=56615, avg=12774.60, stdev=4850.40 00:31:30.392 lat (usec): min=7005, max=56642, avg=12790.88, stdev=4850.23 00:31:30.392 clat percentiles (usec): 00:31:30.392 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10421], 20.00th=[10945], 00:31:30.392 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:31:30.392 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14484], 95.00th=[15533], 00:31:30.392 | 99.00th=[51643], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:31:30.392 | 99.99th=[56361] 00:31:30.392 bw ( KiB/s): min=26368, max=32512, per=34.46%, avg=29977.60, stdev=2105.68, samples=10 00:31:30.392 iops : min= 206, max= 254, avg=234.20, stdev=16.45, samples=10 00:31:30.392 lat (msec) : 10=5.20%, 20=93.53%, 100=1.28% 00:31:30.392 cpu : usr=94.19%, sys=5.31%, ctx=15, majf=0, minf=149 00:31:30.392 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.392 issued rwts: total=1174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.392 filename0: (groupid=0, jobs=1): err= 0: pid=1646710: Mon Dec 9 18:20:52 2024 00:31:30.392 read: IOPS=234, BW=29.4MiB/s (30.8MB/s)(147MiB/5005msec) 00:31:30.392 slat (nsec): min=7489, max=67152, avg=15978.59, stdev=4712.95 00:31:30.392 clat (usec): min=6647, max=54404, avg=12747.40, stdev=3427.32 00:31:30.392 lat (usec): min=6660, max=54419, avg=12763.38, stdev=3427.21 00:31:30.392 clat percentiles (usec): 00:31:30.392 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11338], 00:31:30.392 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12387], 60.00th=[12911], 00:31:30.392 | 70.00th=[13435], 80.00th=[14091], 90.00th=[14877], 95.00th=[15795], 00:31:30.392 | 99.00th=[17171], 99.50th=[50594], 99.90th=[54264], 99.95th=[54264], 
00:31:30.392 | 99.99th=[54264] 00:31:30.392 bw ( KiB/s): min=25088, max=33024, per=34.51%, avg=30022.50, stdev=2050.40, samples=10 00:31:30.392 iops : min= 196, max= 258, avg=234.50, stdev=15.98, samples=10 00:31:30.392 lat (msec) : 10=7.57%, 20=91.92%, 100=0.51% 00:31:30.392 cpu : usr=94.00%, sys=5.48%, ctx=15, majf=0, minf=150 00:31:30.392 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.392 issued rwts: total=1176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.392 00:31:30.392 Run status group 0 (all jobs): 00:31:30.392 READ: bw=85.0MiB/s (89.1MB/s), 26.7MiB/s-29.4MiB/s (28.0MB/s-30.8MB/s), io=429MiB (449MB), run=5005-5045msec 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 bdev_null0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 
18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 [2024-12-09 18:20:52.825629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 bdev_null1 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 
18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.392 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:31:30.393 bdev_null2
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:30.393 {
00:31:30.393 "params": {
00:31:30.393 "name": "Nvme$subsystem",
00:31:30.393 "trtype": "$TEST_TRANSPORT",
00:31:30.393 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:30.393 "adrfam": "ipv4",
00:31:30.393 "trsvcid": "$NVMF_PORT",
00:31:30.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:30.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:30.393 "hdgst": ${hdgst:-false},
00:31:30.393 "ddgst": ${ddgst:-false}
00:31:30.393 },
00:31:30.393 "method": "bdev_nvme_attach_controller"
00:31:30.393 }
00:31:30.393 EOF
00:31:30.393 )")
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:30.393 {
00:31:30.393 "params": {
00:31:30.393 "name": "Nvme$subsystem",
00:31:30.393 "trtype": "$TEST_TRANSPORT",
00:31:30.393 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:30.393 "adrfam": "ipv4",
00:31:30.393 "trsvcid": "$NVMF_PORT",
00:31:30.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:30.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:30.393 "hdgst": ${hdgst:-false},
00:31:30.393 "ddgst": ${ddgst:-false}
00:31:30.393 },
00:31:30.393 "method": "bdev_nvme_attach_controller"
00:31:30.393 }
00:31:30.393 EOF
00:31:30.393 )")
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:30.393 {
00:31:30.393 "params": {
00:31:30.393 "name": "Nvme$subsystem",
00:31:30.393 "trtype": "$TEST_TRANSPORT",
00:31:30.393 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:30.393 "adrfam": "ipv4",
00:31:30.393 "trsvcid": "$NVMF_PORT",
00:31:30.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:30.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:30.393 "hdgst": ${hdgst:-false},
00:31:30.393 "ddgst": ${ddgst:-false}
00:31:30.393 },
00:31:30.393 "method": "bdev_nvme_attach_controller"
00:31:30.393 }
00:31:30.393 EOF
00:31:30.393 )")
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:30.393 "params": {
00:31:30.393 "name": "Nvme0",
00:31:30.393 "trtype": "tcp",
00:31:30.393 "traddr": "10.0.0.2",
00:31:30.393 "adrfam": "ipv4",
00:31:30.393 "trsvcid": "4420",
00:31:30.393 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:30.393 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:30.393 "hdgst": false,
00:31:30.393 "ddgst": false
00:31:30.393 },
00:31:30.393 "method": "bdev_nvme_attach_controller"
00:31:30.393 },{
00:31:30.393 "params": {
00:31:30.393 "name": "Nvme1",
00:31:30.393 "trtype": "tcp",
00:31:30.393 "traddr": "10.0.0.2",
00:31:30.393 "adrfam": "ipv4",
00:31:30.393 "trsvcid": "4420",
00:31:30.393 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:30.393 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:30.393 "hdgst": false,
00:31:30.393 "ddgst": false
00:31:30.393 },
00:31:30.393 "method": "bdev_nvme_attach_controller"
00:31:30.393 },{
00:31:30.393 "params": {
00:31:30.393 "name": "Nvme2",
00:31:30.393 "trtype": "tcp",
00:31:30.393 "traddr": "10.0.0.2",
00:31:30.393 "adrfam": "ipv4",
00:31:30.393 "trsvcid": "4420",
00:31:30.393 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:31:30.393 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:31:30.393 "hdgst": false,
00:31:30.393 "ddgst": false
00:31:30.393 },
00:31:30.393 "method": "bdev_nvme_attach_controller"
00:31:30.393 }'
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:30.393 18:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:30.393 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:31:30.393 ...
00:31:30.393 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:31:30.393 ...
00:31:30.393 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:31:30.393 ...
00:31:30.393 fio-3.35
00:31:30.393 Starting 24 threads
00:31:42.592
00:31:42.592 filename0: (groupid=0, jobs=1): err= 0: pid=1647569: Mon Dec 9 18:21:04 2024
00:31:42.592 read: IOPS=171, BW=685KiB/s (701kB/s)(6872KiB/10038msec)
00:31:42.592 slat (usec): min=8, max=104, avg=21.10, stdev=17.50
00:31:42.592 clat (msec): min=15, max=418, avg=93.32, stdev=113.95
00:31:42.592 lat (msec): min=15, max=418, avg=93.34, stdev=113.96
00:31:42.592 clat percentiles (msec):
00:31:42.592 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.592 | 70.00th=[ 35], 80.00th=[ 226], 90.00th=[ 309], 95.00th=[ 313],
00:31:42.592 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418],
00:31:42.592 | 99.99th=[ 418]
00:31:42.592 bw ( KiB/s): min= 128, max= 2080, per=4.31%, avg=680.80, stdev=757.34, samples=20
00:31:42.592 iops : min= 32, max= 520, avg=170.20, stdev=189.33, samples=20
00:31:42.592 lat (msec) : 20=1.63%, 50=75.09%, 100=0.93%, 250=5.12%, 500=17.23%
00:31:42.592 cpu : usr=98.70%, sys=0.89%, ctx=19, majf=0, minf=56
00:31:42.592 IO depths : 1=4.5%, 2=10.5%, 4=24.2%, 8=52.7%, 16=8.0%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1718,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647570: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=154, BW=619KiB/s (634kB/s)(6208KiB/10029msec)
00:31:42.593 slat (usec): min=4, max=110, avg=27.22, stdev=23.80
00:31:42.593 clat (msec): min=19, max=619, avg=103.14, stdev=145.86
00:31:42.593 lat (msec): min=19, max=619, avg=103.17, stdev=145.88
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 47], 90.00th=[ 414], 95.00th=[ 439],
00:31:42.593 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 617], 99.95th=[ 617],
00:31:42.593 | 99.99th=[ 617]
00:31:42.593 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=640.00, stdev=760.89, samples=19
00:31:42.593 iops : min= 32, max= 480, avg=160.00, stdev=190.22, samples=19
00:31:42.593 lat (msec) : 20=0.19%, 50=79.96%, 100=1.29%, 250=0.39%, 500=17.78%
00:31:42.593 lat (msec) : 750=0.39%
00:31:42.593 cpu : usr=98.25%, sys=1.16%, ctx=60, majf=0, minf=37
00:31:42.593 IO depths : 1=3.9%, 2=10.1%, 4=24.8%, 8=52.6%, 16=8.6%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647571: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=154, BW=619KiB/s (634kB/s)(6208KiB/10030msec)
00:31:42.593 slat (usec): min=6, max=121, avg=31.07, stdev=16.48
00:31:42.593 clat (msec): min=24, max=555, avg=102.91, stdev=145.62
00:31:42.593 lat (msec): min=24, max=555, avg=102.94, stdev=145.62
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 409], 95.00th=[ 443],
00:31:42.593 | 99.00th=[ 451], 99.50th=[ 456], 99.90th=[ 558], 99.95th=[ 558],
00:31:42.593 | 99.99th=[ 558]
00:31:42.593 bw ( KiB/s): min= 128, max= 1920, per=3.93%, avg=620.00, stdev=752.85, samples=20
00:31:42.593 iops : min= 32, max= 480, avg=155.00, stdev=188.21, samples=20
00:31:42.593 lat (msec) : 50=80.41%, 100=1.03%, 250=1.03%, 500=17.40%, 750=0.13%
00:31:42.593 cpu : usr=98.09%, sys=1.31%, ctx=12, majf=0, minf=38
00:31:42.593 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647572: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=154, BW=619KiB/s (634kB/s)(6208KiB/10030msec)
00:31:42.593 slat (usec): min=8, max=102, avg=47.04, stdev=19.58
00:31:42.593 clat (msec): min=32, max=554, avg=102.76, stdev=146.34
00:31:42.593 lat (msec): min=32, max=554, avg=102.80, stdev=146.33
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 409], 95.00th=[ 443],
00:31:42.593 | 99.00th=[ 489], 99.50th=[ 535], 99.90th=[ 558], 99.95th=[ 558],
00:31:42.593 | 99.99th=[ 558]
00:31:42.593 bw ( KiB/s): min= 112, max= 1920, per=3.93%, avg=620.15, stdev=753.10, samples=20
00:31:42.593 iops : min= 28, max= 480, avg=155.00, stdev=188.22, samples=20
00:31:42.593 lat (msec) : 50=80.41%, 100=1.03%, 250=1.03%, 500=16.75%, 750=0.77%
00:31:42.593 cpu : usr=98.55%, sys=1.02%, ctx=14, majf=0, minf=45
00:31:42.593 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647573: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=171, BW=685KiB/s (702kB/s)(6896KiB/10065msec)
00:31:42.593 slat (usec): min=8, max=113, avg=44.61, stdev=27.86
00:31:42.593 clat (msec): min=15, max=484, avg=92.94, stdev=108.16
00:31:42.593 lat (msec): min=15, max=484, avg=92.98, stdev=108.14
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 249], 90.00th=[ 288], 95.00th=[ 309],
00:31:42.593 | 99.00th=[ 347], 99.50th=[ 430], 99.90th=[ 485], 99.95th=[ 485],
00:31:42.593 | 99.99th=[ 485]
00:31:42.593 bw ( KiB/s): min= 144, max= 1920, per=4.33%, avg=683.20, stdev=740.16, samples=20
00:31:42.593 iops : min= 36, max= 480, avg=170.80, stdev=185.04, samples=20
00:31:42.593 lat (msec) : 20=0.93%, 50=75.17%, 250=3.94%, 500=19.95%
00:31:42.593 cpu : usr=98.21%, sys=1.36%, ctx=13, majf=0, minf=36
00:31:42.593 IO depths : 1=4.6%, 2=9.7%, 4=21.5%, 8=56.2%, 16=7.9%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1724,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647574: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=167, BW=670KiB/s (686kB/s)(6720KiB/10029msec)
00:31:42.593 slat (nsec): min=8171, max=65332, avg=25722.17, stdev=11407.82
00:31:42.593 clat (msec): min=32, max=396, avg=95.06, stdev=106.23
00:31:42.593 lat (msec): min=32, max=396, avg=95.09, stdev=106.22
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 241], 90.00th=[ 300], 95.00th=[ 309],
00:31:42.593 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 397], 99.95th=[ 397],
00:31:42.593 | 99.99th=[ 397]
00:31:42.593 bw ( KiB/s): min= 144, max= 1920, per=4.40%, avg=693.05, stdev=715.90, samples=19
00:31:42.593 iops : min= 36, max= 480, avg=173.26, stdev=178.98, samples=19
00:31:42.593 lat (msec) : 50=73.33%, 100=1.90%, 250=7.02%, 500=17.74%
00:31:42.593 cpu : usr=98.47%, sys=1.02%, ctx=23, majf=0, minf=35
00:31:42.593 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647575: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=172, BW=691KiB/s (708kB/s)(6968KiB/10077msec)
00:31:42.593 slat (usec): min=8, max=113, avg=49.35, stdev=31.15
00:31:42.593 clat (msec): min=14, max=440, avg=91.80, stdev=107.55
00:31:42.593 lat (msec): min=14, max=440, avg=91.85, stdev=107.53
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 309],
00:31:42.593 | 99.00th=[ 359], 99.50th=[ 405], 99.90th=[ 439], 99.95th=[ 439],
00:31:42.593 | 99.99th=[ 439]
00:31:42.593 bw ( KiB/s): min= 176, max= 2052, per=4.38%, avg=690.60, stdev=751.70, samples=20
00:31:42.593 iops : min= 44, max= 513, avg=172.65, stdev=187.92, samples=20
00:31:42.593 lat (msec) : 20=2.58%, 50=73.65%, 250=3.90%, 500=19.86%
00:31:42.593 cpu : usr=98.22%, sys=1.28%, ctx=28, majf=0, minf=92
00:31:42.593 IO depths : 1=4.9%, 2=9.8%, 4=21.0%, 8=56.7%, 16=7.7%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1742,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename0: (groupid=0, jobs=1): err= 0: pid=1647576: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=156, BW=625KiB/s (640kB/s)(6272KiB/10035msec)
00:31:42.593 slat (usec): min=8, max=100, avg=27.36, stdev=14.27
00:31:42.593 clat (msec): min=32, max=556, avg=102.13, stdev=147.00
00:31:42.593 lat (msec): min=32, max=556, avg=102.15, stdev=146.99
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 51], 90.00th=[ 409], 95.00th=[ 447],
00:31:42.593 | 99.00th=[ 498], 99.50th=[ 542], 99.90th=[ 558], 99.95th=[ 558],
00:31:42.593 | 99.99th=[ 558]
00:31:42.593 bw ( KiB/s): min= 128, max= 1920, per=3.93%, avg=620.95, stdev=752.53, samples=20
00:31:42.593 iops : min= 32, max= 480, avg=155.20, stdev=188.08, samples=20
00:31:42.593 lat (msec) : 50=80.29%, 100=1.34%, 250=1.02%, 500=16.45%, 750=0.89%
00:31:42.593 cpu : usr=98.40%, sys=1.12%, ctx=17, majf=0, minf=45
00:31:42.593 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0%
00:31:42.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.593 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.593 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.593 filename1: (groupid=0, jobs=1): err= 0: pid=1647577: Mon Dec 9 18:21:04 2024
00:31:42.593 read: IOPS=170, BW=682KiB/s (698kB/s)(6864KiB/10067msec)
00:31:42.593 slat (usec): min=8, max=115, avg=57.97, stdev=29.65
00:31:42.593 clat (msec): min=16, max=433, avg=93.11, stdev=110.87
00:31:42.593 lat (msec): min=16, max=433, avg=93.17, stdev=110.85
00:31:42.593 clat percentiles (msec):
00:31:42.593 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.593 | 70.00th=[ 35], 80.00th=[ 241], 90.00th=[ 305], 95.00th=[ 309],
00:31:42.593 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 435],
00:31:42.594 | 99.99th=[ 435]
00:31:42.594 bw ( KiB/s): min= 128, max= 1920, per=4.31%, avg=680.00, stdev=742.61, samples=20
00:31:42.594 iops : min= 32, max= 480, avg=170.00, stdev=185.65, samples=20
00:31:42.594 lat (msec) : 20=0.93%, 50=75.52%, 250=5.24%, 500=18.30%
00:31:42.594 cpu : usr=98.47%, sys=1.11%, ctx=13, majf=0, minf=42
00:31:42.594 IO depths : 1=4.4%, 2=9.9%, 4=22.6%, 8=55.0%, 16=8.2%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647578: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=173, BW=692KiB/s (709kB/s)(6968KiB/10065msec)
00:31:42.594 slat (usec): min=8, max=193, avg=52.15, stdev=28.62
00:31:42.594 clat (msec): min=15, max=391, avg=91.78, stdev=105.46
00:31:42.594 lat (msec): min=15, max=392, avg=91.84, stdev=105.44
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 226], 90.00th=[ 300], 95.00th=[ 309],
00:31:42.594 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 393], 99.95th=[ 393],
00:31:42.594 | 99.99th=[ 393]
00:31:42.594 bw ( KiB/s): min= 144, max= 2032, per=4.38%, avg=690.40, stdev=742.60, samples=20
00:31:42.594 iops : min= 36, max= 508, avg=172.60, stdev=185.65, samples=20
00:31:42.594 lat (msec) : 20=2.53%, 50=72.68%, 100=0.92%, 250=6.77%, 500=17.11%
00:31:42.594 cpu : usr=98.48%, sys=1.08%, ctx=19, majf=0, minf=47
00:31:42.594 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1742,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647579: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=171, BW=685KiB/s (702kB/s)(6896KiB/10065msec)
00:31:42.594 slat (usec): min=8, max=101, avg=26.68, stdev=19.83
00:31:42.594 clat (msec): min=15, max=422, avg=93.02, stdev=107.86
00:31:42.594 lat (msec): min=15, max=422, avg=93.04, stdev=107.86
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 257], 90.00th=[ 288], 95.00th=[ 309],
00:31:42.594 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 422], 99.95th=[ 422],
00:31:42.594 | 99.99th=[ 422]
00:31:42.594 bw ( KiB/s): min= 176, max= 1920, per=4.33%, avg=683.20, stdev=740.36, samples=20
00:31:42.594 iops : min= 44, max= 480, avg=170.80, stdev=185.09, samples=20
00:31:42.594 lat (msec) : 20=0.93%, 50=75.17%, 250=2.55%, 500=21.35%
00:31:42.594 cpu : usr=98.09%, sys=1.22%, ctx=65, majf=0, minf=41
00:31:42.594 IO depths : 1=4.9%, 2=9.9%, 4=21.2%, 8=56.4%, 16=7.6%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=92.9%, 8=1.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1724,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647580: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=175, BW=704KiB/s (721kB/s)(7096KiB/10083msec)
00:31:42.594 slat (usec): min=6, max=153, avg=34.20, stdev=21.02
00:31:42.594 clat (msec): min=9, max=386, avg=90.44, stdev=104.99
00:31:42.594 lat (msec): min=9, max=386, avg=90.48, stdev=104.98
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 226], 90.00th=[ 300], 95.00th=[ 309],
00:31:42.594 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 388], 99.95th=[ 388],
00:31:42.594 | 99.99th=[ 388]
00:31:42.594 bw ( KiB/s): min= 144, max= 2176, per=4.46%, avg=703.20, stdev=765.74, samples=20
00:31:42.594 iops : min= 36, max= 544, avg=175.80, stdev=191.44, samples=20
00:31:42.594 lat (msec) : 10=0.73%, 20=3.55%, 50=71.36%, 100=0.90%, 250=6.65%
00:31:42.594 lat (msec) : 500=16.80%
00:31:42.594 cpu : usr=98.31%, sys=1.12%, ctx=44, majf=0, minf=68
00:31:42.594 IO depths : 1=3.9%, 2=10.1%, 4=24.8%, 8=52.6%, 16=8.5%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1774,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647581: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=156, BW=627KiB/s (642kB/s)(6272KiB/10008msec)
00:31:42.594 slat (usec): min=4, max=120, avg=30.35, stdev=14.60
00:31:42.594 clat (msec): min=32, max=557, avg=101.84, stdev=144.26
00:31:42.594 lat (msec): min=32, max=557, avg=101.87, stdev=144.26
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 52], 90.00th=[ 388], 95.00th=[ 443],
00:31:42.594 | 99.00th=[ 464], 99.50th=[ 518], 99.90th=[ 558], 99.95th=[ 558],
00:31:42.594 | 99.99th=[ 558]
00:31:42.594 bw ( KiB/s): min= 128, max= 1920, per=3.93%, avg=620.80, stdev=752.06, samples=20
00:31:42.594 iops : min= 32, max= 480, avg=155.20, stdev=188.01, samples=20
00:31:42.594 lat (msec) : 50=79.59%, 100=2.04%, 250=1.02%, 500=16.58%, 750=0.77%
00:31:42.594 cpu : usr=98.40%, sys=1.08%, ctx=46, majf=0, minf=46
00:31:42.594 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647582: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=155, BW=620KiB/s (635kB/s)(6208KiB/10005msec)
00:31:42.594 slat (nsec): min=8582, max=98982, avg=43853.81, stdev=21992.51
00:31:42.594 clat (msec): min=20, max=453, avg=102.75, stdev=144.80
00:31:42.594 lat (msec): min=21, max=453, avg=102.79, stdev=144.79
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 409], 95.00th=[ 443],
00:31:42.594 | 99.00th=[ 451], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456],
00:31:42.594 | 99.99th=[ 456]
00:31:42.594 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=640.16, stdev=761.08, samples=19
00:31:42.594 iops : min= 32, max= 480, avg=160.00, stdev=190.21, samples=19
00:31:42.594 lat (msec) : 50=80.41%, 100=1.03%, 500=18.56%
00:31:42.594 cpu : usr=98.56%, sys=0.92%, ctx=38, majf=0, minf=39
00:31:42.594 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647583: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=168, BW=674KiB/s (690kB/s)(6776KiB/10049msec)
00:31:42.594 slat (usec): min=8, max=109, avg=46.43, stdev=26.58
00:31:42.594 clat (msec): min=24, max=448, avg=94.23, stdev=108.23
00:31:42.594 lat (msec): min=24, max=448, avg=94.28, stdev=108.21
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 239], 90.00th=[ 296], 95.00th=[ 305],
00:31:42.594 | 99.00th=[ 342], 99.50th=[ 401], 99.90th=[ 447], 99.95th=[ 447],
00:31:42.594 | 99.99th=[ 447]
00:31:42.594 bw ( KiB/s): min= 144, max= 1920, per=4.28%, avg=675.20, stdev=722.26, samples=20
00:31:42.594 iops : min= 36, max= 480, avg=168.80, stdev=180.57, samples=20
00:31:42.594 lat (msec) : 50=75.56%, 250=5.67%, 500=18.77%
00:31:42.594 cpu : usr=98.37%, sys=1.09%, ctx=43, majf=0, minf=48
00:31:42.594 IO depths : 1=4.4%, 2=9.9%, 4=22.4%, 8=55.1%, 16=8.1%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename1: (groupid=0, jobs=1): err= 0: pid=1647584: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=168, BW=674KiB/s (690kB/s)(6768KiB/10039msec)
00:31:42.594 slat (usec): min=5, max=112, avg=58.09, stdev=27.51
00:31:42.594 clat (msec): min=31, max=442, avg=94.30, stdev=109.03
00:31:42.594 lat (msec): min=31, max=442, avg=94.36, stdev=109.02
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.594 | 70.00th=[ 35], 80.00th=[ 241], 90.00th=[ 296], 95.00th=[ 309],
00:31:42.594 | 99.00th=[ 359], 99.50th=[ 439], 99.90th=[ 443], 99.95th=[ 443],
00:31:42.594 | 99.99th=[ 443]
00:31:42.594 bw ( KiB/s): min= 128, max= 1920, per=4.26%, avg=672.80, stdev=716.77, samples=20
00:31:42.594 iops : min= 32, max= 480, avg=168.20, stdev=179.19, samples=20
00:31:42.594 lat (msec) : 50=75.65%, 250=5.56%, 500=18.79%
00:31:42.594 cpu : usr=98.39%, sys=1.14%, ctx=24, majf=0, minf=50
00:31:42.594 IO depths : 1=5.3%, 2=10.7%, 4=22.5%, 8=54.3%, 16=7.3%, 32=0.0%, >=64=0.0%
00:31:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.594 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.594 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.594 filename2: (groupid=0, jobs=1): err= 0: pid=1647585: Mon Dec 9 18:21:04 2024
00:31:42.594 read: IOPS=171, BW=687KiB/s (703kB/s)(6912KiB/10063msec)
00:31:42.594 slat (usec): min=8, max=115, avg=50.85, stdev=30.09
00:31:42.594 clat (msec): min=15, max=464, avg=92.52, stdev=106.89
00:31:42.594 lat (msec): min=15, max=464, avg=92.57, stdev=106.87
00:31:42.594 clat percentiles (msec):
00:31:42.594 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.595 | 70.00th=[ 35], 80.00th=[ 226], 90.00th=[ 305], 95.00th=[ 309],
00:31:42.595 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 464], 99.95th=[ 464],
00:31:42.595 | 99.99th=[ 464]
00:31:42.595 bw ( KiB/s): min= 144, max= 1923, per=4.34%, avg=684.95, stdev=739.60, samples=20
00:31:42.595 iops : min= 36, max= 480, avg=171.20, stdev=184.83, samples=20
00:31:42.595 lat (msec) : 20=0.93%, 50=75.00%, 250=6.83%, 500=17.25%
00:31:42.595 cpu : usr=98.34%, sys=1.20%, ctx=15, majf=0, minf=31
00:31:42.595 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0%
00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.595 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.595 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647586: Mon Dec 9 18:21:04 2024
00:31:42.595 read: IOPS=168, BW=672KiB/s (688kB/s)(6744KiB/10034msec)
00:31:42.595 slat (usec): min=8, max=116, avg=58.16, stdev=29.43
00:31:42.595 clat (msec): min=31, max=395, avg=94.68, stdev=108.97
00:31:42.595 lat (msec): min=31, max=395, avg=94.74, stdev=108.95
00:31:42.595 clat percentiles (msec):
00:31:42.595 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:31:42.595 | 70.00th=[ 35], 80.00th=[ 239], 90.00th=[ 296], 95.00th=[ 309],
00:31:42.595 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 397], 99.95th=[ 397],
00:31:42.595 | 99.99th=[ 397]
00:31:42.595 bw ( KiB/s): min= 128, max= 1920, per=4.24%, avg=668.15, stdev=721.76, samples=20
00:31:42.595 iops : min= 32, max= 480, avg=167.00, stdev=180.38, samples=20
00:31:42.595 lat (msec) : 50=74.02%, 100=1.54%, 250=7.24%, 500=17.20%
00:31:42.595 cpu : usr=98.34%, sys=1.24%, ctx=16, majf=0, minf=37
00:31:42.595 IO depths : 1=4.9%, 2=10.7%, 4=23.5%, 8=53.3%, 16=7.6%, 32=0.0%, >=64=0.0%
00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647587: Mon Dec 9 18:21:04 2024 00:31:42.595 read: IOPS=176, BW=705KiB/s (722kB/s)(7104KiB/10078msec) 00:31:42.595 slat (usec): min=6, max=167, avg=26.88, stdev=16.09 00:31:42.595 clat (msec): min=5, max=360, avg=90.33, stdev=104.94 00:31:42.595 lat (msec): min=5, max=360, avg=90.35, stdev=104.93 00:31:42.595 clat percentiles (msec): 00:31:42.595 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 34], 00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:42.595 | 70.00th=[ 35], 80.00th=[ 226], 90.00th=[ 300], 95.00th=[ 309], 00:31:42.595 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 363], 99.95th=[ 363], 00:31:42.595 | 99.99th=[ 363] 00:31:42.595 bw ( KiB/s): min= 144, max= 2176, per=4.47%, avg=704.00, stdev=767.07, samples=20 00:31:42.595 iops : min= 36, max= 544, avg=176.00, stdev=191.77, samples=20 00:31:42.595 lat (msec) : 10=1.07%, 20=3.43%, 50=71.17%, 100=0.90%, 250=6.64% 00:31:42.595 lat (msec) : 500=16.78% 00:31:42.595 cpu : usr=98.28%, sys=1.29%, ctx=14, majf=0, minf=45 00:31:42.595 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647588: Mon Dec 9 18:21:04 2024 00:31:42.595 read: IOPS=157, BW=632KiB/s (647kB/s)(6320KiB/10006msec) 00:31:42.595 slat (usec): min=8, max=119, avg=58.91, stdev=24.05 00:31:42.595 clat (msec): min=16, max=550, avg=101.00, stdev=137.55 
00:31:42.595 lat (msec): min=16, max=550, avg=101.06, stdev=137.53 00:31:42.595 clat percentiles (msec): 00:31:42.595 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 34], 00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:42.595 | 70.00th=[ 39], 80.00th=[ 87], 90.00th=[ 388], 95.00th=[ 422], 00:31:42.595 | 99.00th=[ 456], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 550], 00:31:42.595 | 99.99th=[ 550] 00:31:42.595 bw ( KiB/s): min= 128, max= 1904, per=4.13%, avg=651.95, stdev=747.46, samples=19 00:31:42.595 iops : min= 32, max= 476, avg=162.95, stdev=186.81, samples=19 00:31:42.595 lat (msec) : 20=0.51%, 50=77.47%, 100=2.03%, 250=0.38%, 500=19.11% 00:31:42.595 lat (msec) : 750=0.51% 00:31:42.595 cpu : usr=98.57%, sys=1.00%, ctx=13, majf=0, minf=25 00:31:42.595 IO depths : 1=0.5%, 2=3.3%, 4=12.3%, 8=69.9%, 16=14.0%, 32=0.0%, >=64=0.0% 00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 complete : 0=0.0%, 4=91.3%, 8=5.1%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647589: Mon Dec 9 18:21:04 2024 00:31:42.595 read: IOPS=169, BW=680KiB/s (696kB/s)(6840KiB/10060msec) 00:31:42.595 slat (nsec): min=8217, max=84736, avg=27591.01, stdev=14140.02 00:31:42.595 clat (msec): min=19, max=455, avg=93.58, stdev=108.22 00:31:42.595 lat (msec): min=19, max=455, avg=93.61, stdev=108.21 00:31:42.595 clat percentiles (msec): 00:31:42.595 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:42.595 | 70.00th=[ 35], 80.00th=[ 236], 90.00th=[ 296], 95.00th=[ 305], 00:31:42.595 | 99.00th=[ 359], 99.50th=[ 397], 99.90th=[ 456], 99.95th=[ 456], 00:31:42.595 | 99.99th=[ 456] 00:31:42.595 bw ( KiB/s): min= 128, max= 1920, per=4.32%, 
avg=681.60, stdev=734.28, samples=20 00:31:42.595 iops : min= 32, max= 480, avg=170.40, stdev=183.57, samples=20 00:31:42.595 lat (msec) : 20=0.29%, 50=75.50%, 250=5.73%, 500=18.48% 00:31:42.595 cpu : usr=97.75%, sys=1.51%, ctx=113, majf=0, minf=56 00:31:42.595 IO depths : 1=5.0%, 2=10.2%, 4=21.8%, 8=55.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647590: Mon Dec 9 18:21:04 2024 00:31:42.595 read: IOPS=159, BW=639KiB/s (654kB/s)(6400KiB/10022msec) 00:31:42.595 slat (usec): min=8, max=123, avg=40.94, stdev=26.32 00:31:42.595 clat (msec): min=24, max=502, avg=99.87, stdev=135.42 00:31:42.595 lat (msec): min=24, max=502, avg=99.91, stdev=135.43 00:31:42.595 clat percentiles (msec): 00:31:42.595 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:42.595 | 70.00th=[ 35], 80.00th=[ 44], 90.00th=[ 388], 95.00th=[ 414], 00:31:42.595 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 502], 99.95th=[ 502], 00:31:42.595 | 99.99th=[ 502] 00:31:42.595 bw ( KiB/s): min= 128, max= 1920, per=4.02%, avg=633.60, stdev=751.41, samples=20 00:31:42.595 iops : min= 32, max= 480, avg=158.40, stdev=187.85, samples=20 00:31:42.595 lat (msec) : 50=80.00%, 250=2.75%, 500=17.12%, 750=0.12% 00:31:42.595 cpu : usr=98.28%, sys=1.28%, ctx=8, majf=0, minf=47 00:31:42.595 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1600,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647591: Mon Dec 9 18:21:04 2024 00:31:42.595 read: IOPS=155, BW=620KiB/s (635kB/s)(6208KiB/10006msec) 00:31:42.595 slat (nsec): min=9023, max=97729, avg=40888.94, stdev=18664.41 00:31:42.595 clat (msec): min=32, max=533, avg=102.77, stdev=145.68 00:31:42.595 lat (msec): min=32, max=533, avg=102.81, stdev=145.67 00:31:42.595 clat percentiles (msec): 00:31:42.595 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:42.595 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 409], 95.00th=[ 443], 00:31:42.595 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 535], 99.95th=[ 535], 00:31:42.595 | 99.99th=[ 535] 00:31:42.595 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=640.16, stdev=767.20, samples=19 00:31:42.595 iops : min= 32, max= 480, avg=160.00, stdev=191.75, samples=19 00:31:42.595 lat (msec) : 50=80.41%, 100=1.03%, 250=0.90%, 500=17.27%, 750=0.39% 00:31:42.595 cpu : usr=98.53%, sys=1.05%, ctx=16, majf=0, minf=37 00:31:42.595 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 filename2: (groupid=0, jobs=1): err= 0: pid=1647592: Mon Dec 9 18:21:04 2024 00:31:42.595 read: IOPS=155, BW=620KiB/s (635kB/s)(6208KiB/10006msec) 00:31:42.595 slat (usec): min=9, max=105, avg=59.99, stdev=21.46 00:31:42.595 clat (msec): min=31, max=524, avg=102.62, stdev=145.56 00:31:42.595 lat (msec): min=31, max=525, avg=102.68, stdev=145.54 00:31:42.595 clat percentiles (msec): 00:31:42.595 | 1.00th=[ 33], 
5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:31:42.595 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:42.595 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 409], 95.00th=[ 443], 00:31:42.595 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 527], 99.95th=[ 527], 00:31:42.595 | 99.99th=[ 527] 00:31:42.595 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=640.16, stdev=767.20, samples=19 00:31:42.595 iops : min= 32, max= 480, avg=160.00, stdev=191.75, samples=19 00:31:42.595 lat (msec) : 50=80.41%, 100=1.03%, 250=0.90%, 500=17.53%, 750=0.13% 00:31:42.595 cpu : usr=98.35%, sys=1.22%, ctx=14, majf=0, minf=38 00:31:42.595 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:42.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.595 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:42.595 00:31:42.595 Run status group 0 (all jobs): 00:31:42.595 READ: bw=15.4MiB/s (16.1MB/s), 619KiB/s-705KiB/s (634kB/s-722kB/s), io=155MiB (163MB), run=10005-10083msec 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 bdev_null0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 [2024-12-09 18:21:04.797484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@18 -- # local sub_id=1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 bdev_null1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 
00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:42.596 { 00:31:42.596 "params": { 00:31:42.596 "name": "Nvme$subsystem", 00:31:42.596 "trtype": "$TEST_TRANSPORT", 00:31:42.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.596 "adrfam": "ipv4", 00:31:42.596 "trsvcid": "$NVMF_PORT", 00:31:42.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.596 "hdgst": ${hdgst:-false}, 00:31:42.596 "ddgst": ${ddgst:-false} 00:31:42.596 }, 00:31:42.596 "method": "bdev_nvme_attach_controller" 00:31:42.596 } 00:31:42.596 EOF 00:31:42.596 )") 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:42.596 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:42.596 { 00:31:42.596 "params": { 00:31:42.596 "name": "Nvme$subsystem", 00:31:42.596 "trtype": "$TEST_TRANSPORT", 00:31:42.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.596 "adrfam": "ipv4", 00:31:42.597 "trsvcid": "$NVMF_PORT", 00:31:42.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.597 "hdgst": ${hdgst:-false}, 00:31:42.597 
"ddgst": ${ddgst:-false} 00:31:42.597 }, 00:31:42.597 "method": "bdev_nvme_attach_controller" 00:31:42.597 } 00:31:42.597 EOF 00:31:42.597 )") 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:42.597 "params": { 00:31:42.597 "name": "Nvme0", 00:31:42.597 "trtype": "tcp", 00:31:42.597 "traddr": "10.0.0.2", 00:31:42.597 "adrfam": "ipv4", 00:31:42.597 "trsvcid": "4420", 00:31:42.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.597 "hdgst": false, 00:31:42.597 "ddgst": false 00:31:42.597 }, 00:31:42.597 "method": "bdev_nvme_attach_controller" 00:31:42.597 },{ 00:31:42.597 "params": { 00:31:42.597 "name": "Nvme1", 00:31:42.597 "trtype": "tcp", 00:31:42.597 "traddr": "10.0.0.2", 00:31:42.597 "adrfam": "ipv4", 00:31:42.597 "trsvcid": "4420", 00:31:42.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.597 "hdgst": false, 00:31:42.597 "ddgst": false 00:31:42.597 }, 00:31:42.597 "method": "bdev_nvme_attach_controller" 00:31:42.597 }' 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.597 18:21:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.597 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:42.597 ... 00:31:42.597 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:42.597 ... 
00:31:42.597 fio-3.35 00:31:42.597 Starting 4 threads 00:31:49.159 00:31:49.159 filename0: (groupid=0, jobs=1): err= 0: pid=1649093: Mon Dec 9 18:21:11 2024 00:31:49.159 read: IOPS=1932, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5001msec) 00:31:49.159 slat (nsec): min=4288, max=57766, avg=13367.11, stdev=4699.73 00:31:49.159 clat (usec): min=867, max=7521, avg=4091.53, stdev=547.33 00:31:49.159 lat (usec): min=882, max=7542, avg=4104.90, stdev=547.32 00:31:49.159 clat percentiles (usec): 00:31:49.159 | 1.00th=[ 2376], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3851], 00:31:49.159 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:31:49.159 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4883], 00:31:49.159 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7504], 00:31:49.159 | 99.99th=[ 7504] 00:31:49.159 bw ( KiB/s): min=14893, max=15936, per=25.15%, avg=15457.30, stdev=313.27, samples=10 00:31:49.159 iops : min= 1861, max= 1992, avg=1932.10, stdev=39.28, samples=10 00:31:49.159 lat (usec) : 1000=0.07% 00:31:49.159 lat (msec) : 2=0.50%, 4=31.59%, 10=67.84% 00:31:49.159 cpu : usr=89.60%, sys=7.30%, ctx=320, majf=0, minf=9 00:31:49.159 IO depths : 1=0.5%, 2=16.2%, 4=56.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 issued rwts: total=9664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.159 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.159 filename0: (groupid=0, jobs=1): err= 0: pid=1649094: Mon Dec 9 18:21:11 2024 00:31:49.159 read: IOPS=1921, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5002msec) 00:31:49.159 slat (nsec): min=3947, max=36746, avg=13591.16, stdev=3843.06 00:31:49.159 clat (usec): min=815, max=7588, avg=4111.09, stdev=600.08 00:31:49.159 lat (usec): min=828, max=7604, avg=4124.68, stdev=599.98 00:31:49.159 clat percentiles (usec): 
00:31:49.159 | 1.00th=[ 2409], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3851], 00:31:49.159 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:31:49.159 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 5145], 00:31:49.159 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 7504], 00:31:49.159 | 99.99th=[ 7570] 00:31:49.159 bw ( KiB/s): min=14864, max=16016, per=25.01%, avg=15371.00, stdev=340.65, samples=10 00:31:49.159 iops : min= 1858, max= 2002, avg=1921.30, stdev=42.60, samples=10 00:31:49.159 lat (usec) : 1000=0.08% 00:31:49.159 lat (msec) : 2=0.49%, 4=28.56%, 10=70.87% 00:31:49.159 cpu : usr=94.74%, sys=4.74%, ctx=8, majf=0, minf=9 00:31:49.159 IO depths : 1=0.3%, 2=18.2%, 4=54.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 issued rwts: total=9612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.159 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.159 filename1: (groupid=0, jobs=1): err= 0: pid=1649095: Mon Dec 9 18:21:11 2024 00:31:49.159 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5003msec) 00:31:49.159 slat (nsec): min=4092, max=58749, avg=13768.19, stdev=4474.59 00:31:49.159 clat (usec): min=639, max=7432, avg=4199.92, stdev=669.14 00:31:49.159 lat (usec): min=652, max=7449, avg=4213.69, stdev=668.90 00:31:49.159 clat percentiles (usec): 00:31:49.159 | 1.00th=[ 2376], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3982], 00:31:49.159 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:31:49.159 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4883], 95.00th=[ 5538], 00:31:49.159 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7373], 99.95th=[ 7439], 00:31:49.159 | 99.99th=[ 7439] 00:31:49.159 bw ( KiB/s): min=13856, max=15552, per=24.48%, avg=15046.40, stdev=578.92, samples=10 00:31:49.159 iops : min= 
1732, max= 1944, avg=1880.80, stdev=72.36, samples=10 00:31:49.159 lat (usec) : 750=0.02%, 1000=0.15% 00:31:49.159 lat (msec) : 2=0.71%, 4=23.16%, 10=75.96% 00:31:49.159 cpu : usr=90.62%, sys=6.64%, ctx=158, majf=0, minf=0 00:31:49.159 IO depths : 1=0.3%, 2=17.6%, 4=55.2%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 issued rwts: total=9412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.159 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.159 filename1: (groupid=0, jobs=1): err= 0: pid=1649096: Mon Dec 9 18:21:11 2024 00:31:49.159 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5002msec) 00:31:49.159 slat (nsec): min=4054, max=38895, avg=12864.52, stdev=3884.89 00:31:49.159 clat (usec): min=886, max=8149, avg=4055.32, stdev=558.64 00:31:49.159 lat (usec): min=900, max=8160, avg=4068.19, stdev=558.68 00:31:49.159 clat percentiles (usec): 00:31:49.159 | 1.00th=[ 2409], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3785], 00:31:49.159 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:31:49.159 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4752], 00:31:49.159 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7570], 00:31:49.159 | 99.99th=[ 8160] 00:31:49.159 bw ( KiB/s): min=14880, max=16304, per=25.37%, avg=15598.40, stdev=439.57, samples=10 00:31:49.159 iops : min= 1860, max= 2038, avg=1949.80, stdev=54.95, samples=10 00:31:49.159 lat (usec) : 1000=0.03% 00:31:49.159 lat (msec) : 2=0.39%, 4=34.54%, 10=65.04% 00:31:49.159 cpu : usr=94.34%, sys=4.96%, ctx=60, majf=0, minf=0 00:31:49.159 IO depths : 1=0.3%, 2=17.9%, 4=54.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.159 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:49.159 issued rwts: total=9754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.159 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.159 00:31:49.159 Run status group 0 (all jobs): 00:31:49.159 READ: bw=60.0MiB/s (62.9MB/s), 14.7MiB/s-15.2MiB/s (15.4MB/s-16.0MB/s), io=300MiB (315MB), run=5001-5003msec 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:49.159 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.160 00:31:49.160 real 0m24.951s 00:31:49.160 user 4m35.074s 00:31:49.160 sys 0m5.792s 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 ************************************ 00:31:49.160 END TEST fio_dif_rand_params 00:31:49.160 ************************************ 00:31:49.160 18:21:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:49.160 18:21:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:49.160 18:21:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 ************************************ 00:31:49.160 START TEST fio_dif_digest 00:31:49.160 ************************************ 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:49.160 
18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 bdev_null0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:49.160 [2024-12-09 18:21:11.474382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:49.160 { 00:31:49.160 "params": { 00:31:49.160 "name": "Nvme$subsystem", 00:31:49.160 "trtype": "$TEST_TRANSPORT", 00:31:49.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.160 "adrfam": "ipv4", 00:31:49.160 "trsvcid": "$NVMF_PORT", 00:31:49.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.160 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:49.160 "hdgst": ${hdgst:-false}, 00:31:49.160 "ddgst": ${ddgst:-false} 00:31:49.160 }, 00:31:49.160 "method": "bdev_nvme_attach_controller" 00:31:49.160 } 00:31:49.160 EOF 00:31:49.160 )") 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:49.160 
18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:49.160 "params": { 00:31:49.160 "name": "Nvme0", 00:31:49.160 "trtype": "tcp", 00:31:49.160 "traddr": "10.0.0.2", 00:31:49.160 "adrfam": "ipv4", 00:31:49.160 "trsvcid": "4420", 00:31:49.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:49.160 "hdgst": true, 00:31:49.160 "ddgst": true 00:31:49.160 }, 00:31:49.160 "method": "bdev_nvme_attach_controller" 00:31:49.160 }' 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 
00:31:49.160 18:21:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.160 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:49.160 ... 00:31:49.160 fio-3.35 00:31:49.160 Starting 3 threads 00:32:01.424 00:32:01.424 filename0: (groupid=0, jobs=1): err= 0: pid=1650473: Mon Dec 9 18:21:22 2024 00:32:01.424 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(271MiB/10045msec) 00:32:01.424 slat (nsec): min=4483, max=47776, avg=15762.92, stdev=2692.63 00:32:01.424 clat (usec): min=8852, max=50307, avg=13867.69, stdev=1469.46 00:32:01.424 lat (usec): min=8861, max=50326, avg=13883.45, stdev=1469.57 00:32:01.424 clat percentiles (usec): 00:32:01.424 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12649], 20.00th=[13042], 00:32:01.424 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:32:01.424 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15270], 00:32:01.424 | 99.00th=[16057], 99.50th=[16581], 99.90th=[18220], 99.95th=[50070], 00:32:01.424 | 99.99th=[50070] 00:32:01.424 bw ( KiB/s): min=26880, max=28672, per=35.51%, avg=27699.20, stdev=451.88, samples=20 00:32:01.424 iops : min= 210, max= 224, avg=216.40, stdev= 3.53, samples=20 00:32:01.424 lat (msec) : 10=0.55%, 20=99.35%, 50=0.05%, 100=0.05% 00:32:01.424 cpu : usr=93.23%, sys=5.98%, ctx=181, majf=0, minf=180 00:32:01.424 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.424 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:01.424 filename0: (groupid=0, jobs=1): err= 0: pid=1650474: Mon Dec 9 18:21:22 2024 00:32:01.424 read: IOPS=201, 
BW=25.2MiB/s (26.4MB/s)(253MiB/10047msec) 00:32:01.424 slat (nsec): min=4548, max=65354, avg=15884.73, stdev=2690.86 00:32:01.424 clat (usec): min=11512, max=59837, avg=14857.61, stdev=2211.52 00:32:01.424 lat (usec): min=11526, max=59848, avg=14873.49, stdev=2211.39 00:32:01.424 clat percentiles (usec): 00:32:01.424 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:32:01.424 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:32:01.424 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:32:01.424 | 99.00th=[16909], 99.50th=[17171], 99.90th=[60031], 99.95th=[60031], 00:32:01.424 | 99.99th=[60031] 00:32:01.424 bw ( KiB/s): min=23552, max=26880, per=33.16%, avg=25868.80, stdev=666.92, samples=20 00:32:01.424 iops : min= 184, max= 210, avg=202.10, stdev= 5.21, samples=20 00:32:01.424 lat (msec) : 20=99.75%, 50=0.10%, 100=0.15% 00:32:01.424 cpu : usr=93.91%, sys=5.53%, ctx=22, majf=0, minf=157 00:32:01.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.424 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:01.424 filename0: (groupid=0, jobs=1): err= 0: pid=1650475: Mon Dec 9 18:21:22 2024 00:32:01.424 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(242MiB/10045msec) 00:32:01.424 slat (nsec): min=4429, max=45486, avg=14587.15, stdev=1901.14 00:32:01.424 clat (usec): min=9803, max=54914, avg=15548.55, stdev=1587.80 00:32:01.424 lat (usec): min=9813, max=54933, avg=15563.14, stdev=1587.97 00:32:01.424 clat percentiles (usec): 00:32:01.424 | 1.00th=[12649], 5.00th=[13829], 10.00th=[14353], 20.00th=[14746], 00:32:01.424 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:32:01.424 | 70.00th=[15926], 
80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:32:01.424 | 99.00th=[18220], 99.50th=[18744], 99.90th=[48497], 99.95th=[54789], 00:32:01.424 | 99.99th=[54789] 00:32:01.424 bw ( KiB/s): min=24320, max=25344, per=31.69%, avg=24719.20, stdev=301.77, samples=20 00:32:01.424 iops : min= 190, max= 198, avg=193.10, stdev= 2.38, samples=20 00:32:01.424 lat (msec) : 10=0.21%, 20=99.53%, 50=0.21%, 100=0.05% 00:32:01.424 cpu : usr=94.34%, sys=5.17%, ctx=17, majf=0, minf=60 00:32:01.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.424 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:01.424 00:32:01.424 Run status group 0 (all jobs): 00:32:01.424 READ: bw=76.2MiB/s (79.9MB/s), 24.1MiB/s-27.0MiB/s (25.2MB/s-28.3MB/s), io=765MiB (803MB), run=10045-10047msec 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.424 00:32:01.424 real 0m11.242s 00:32:01.424 user 0m29.490s 00:32:01.424 sys 0m1.986s 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.424 18:21:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.424 ************************************ 00:32:01.424 END TEST fio_dif_digest 00:32:01.424 ************************************ 00:32:01.424 18:21:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:01.424 18:21:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.424 rmmod nvme_tcp 00:32:01.424 rmmod nvme_fabrics 00:32:01.424 rmmod nvme_keyring 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1643676 ']' 00:32:01.424 18:21:22 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1643676 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1643676 ']' 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1643676 00:32:01.424 18:21:22 nvmf_dif -- 
common/autotest_common.sh@959 -- # uname 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643676 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643676' 00:32:01.424 killing process with pid 1643676 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1643676 00:32:01.424 18:21:22 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1643676 00:32:01.424 18:21:23 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:01.424 18:21:23 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:01.424 Waiting for block devices as requested 00:32:01.424 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:01.424 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:01.424 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:01.683 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:01.683 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:01.683 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:01.941 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:01.942 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:01.942 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:01.942 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:02.205 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:02.205 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:02.205 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.205 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.465 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.465 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:02.465 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 
00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.465 18:21:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.465 18:21:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:02.465 18:21:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.006 18:21:27 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.006 00:32:05.006 real 1m8.282s 00:32:05.006 user 6m33.996s 00:32:05.006 sys 0m16.979s 00:32:05.006 18:21:27 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.006 18:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:05.006 ************************************ 00:32:05.006 END TEST nvmf_dif 00:32:05.006 ************************************ 00:32:05.006 18:21:27 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:05.006 18:21:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:05.006 18:21:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.006 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:32:05.006 ************************************ 00:32:05.006 START TEST nvmf_abort_qd_sizes 00:32:05.006 ************************************ 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:05.006 * Looking for test storage... 00:32:05.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.006 --rc genhtml_branch_coverage=1 00:32:05.006 --rc genhtml_function_coverage=1 00:32:05.006 --rc genhtml_legend=1 00:32:05.006 --rc geninfo_all_blocks=1 00:32:05.006 --rc geninfo_unexecuted_blocks=1 00:32:05.006 00:32:05.006 ' 00:32:05.006 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.006 --rc genhtml_branch_coverage=1 00:32:05.006 --rc genhtml_function_coverage=1 00:32:05.006 --rc genhtml_legend=1 00:32:05.006 --rc 
geninfo_all_blocks=1 00:32:05.006 --rc geninfo_unexecuted_blocks=1 00:32:05.007 00:32:05.007 ' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:05.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.007 --rc genhtml_branch_coverage=1 00:32:05.007 --rc genhtml_function_coverage=1 00:32:05.007 --rc genhtml_legend=1 00:32:05.007 --rc geninfo_all_blocks=1 00:32:05.007 --rc geninfo_unexecuted_blocks=1 00:32:05.007 00:32:05.007 ' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:05.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.007 --rc genhtml_branch_coverage=1 00:32:05.007 --rc genhtml_function_coverage=1 00:32:05.007 --rc genhtml_legend=1 00:32:05.007 --rc geninfo_all_blocks=1 00:32:05.007 --rc geninfo_unexecuted_blocks=1 00:32:05.007 00:32:05.007 ' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.007 18:21:27 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.007 18:21:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:05.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.007 18:21:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.909 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.910 18:21:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:06.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:06.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:06.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:06.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.910 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.170 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.170 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.170 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.170 18:21:29 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:32:07.170 00:32:07.170 --- 10.0.0.2 ping statistics --- 00:32:07.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.170 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:32:07.170 00:32:07.170 --- 10.0.0.1 ping statistics --- 00:32:07.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.170 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:07.170 18:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:08.545 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:08.545 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:08.545 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:09.484 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.484 18:21:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1655392 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1655392 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1655392 ']' 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.484 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:09.743 [2024-12-09 18:21:32.559455] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:32:09.743 [2024-12-09 18:21:32.559552] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.743 [2024-12-09 18:21:32.629932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.743 [2024-12-09 18:21:32.688747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.743 [2024-12-09 18:21:32.688804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.743 [2024-12-09 18:21:32.688818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.743 [2024-12-09 18:21:32.688830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.743 [2024-12-09 18:21:32.688850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:09.743 [2024-12-09 18:21:32.690320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.743 [2024-12-09 18:21:32.690428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.743 [2024-12-09 18:21:32.690502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.743 [2024-12-09 18:21:32.690505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:10.001 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.002 18:21:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:10.002 ************************************ 00:32:10.002 START TEST spdk_target_abort 00:32:10.002 ************************************ 00:32:10.002 18:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:10.002 18:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:10.002 18:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:10.002 18:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.002 18:21:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.288 spdk_targetn1 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.288 [2024-12-09 18:21:35.709820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.288 [2024-12-09 18:21:35.758187] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:13.288 18:21:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.575 Initializing NVMe Controllers 00:32:16.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:16.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:16.575 Initialization complete. Launching workers. 
00:32:16.575 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12235, failed: 0 00:32:16.575 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1304, failed to submit 10931 00:32:16.575 success 751, unsuccessful 553, failed 0 00:32:16.575 18:21:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.575 18:21:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:19.860 Initializing NVMe Controllers 00:32:19.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:19.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:19.860 Initialization complete. Launching workers. 00:32:19.860 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8658, failed: 0 00:32:19.860 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7416 00:32:19.860 success 315, unsuccessful 927, failed 0 00:32:19.860 18:21:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:19.860 18:21:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:23.143 Initializing NVMe Controllers 00:32:23.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:23.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:23.143 Initialization complete. Launching workers. 
00:32:23.143 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31326, failed: 0 00:32:23.143 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2647, failed to submit 28679 00:32:23.143 success 537, unsuccessful 2110, failed 0 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.143 18:21:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1655392 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1655392 ']' 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1655392 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1655392 00:32:24.080 18:21:46 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1655392' 00:32:24.080 killing process with pid 1655392 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1655392 00:32:24.080 18:21:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1655392 00:32:24.338 00:32:24.338 real 0m14.340s 00:32:24.338 user 0m54.155s 00:32:24.338 sys 0m2.748s 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:24.338 ************************************ 00:32:24.338 END TEST spdk_target_abort 00:32:24.338 ************************************ 00:32:24.338 18:21:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:24.338 18:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:24.338 18:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.338 18:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:24.338 ************************************ 00:32:24.338 START TEST kernel_target_abort 00:32:24.338 ************************************ 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:24.338 18:21:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:24.338 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:24.339 18:21:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:25.716 Waiting for block devices as requested 00:32:25.716 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:25.716 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:25.716 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:25.716 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:25.975 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:25.975 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:25.975 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.234 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:26.234 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.234 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:26.234 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:26.493 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:26.493 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:26.493 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:26.751 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.751 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:26.751 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:26.751 18:21:49 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:26.751 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:27.010 No valid GPT data, bailing 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:27.010 00:32:27.010 Discovery Log Number of Records 2, Generation counter 2 00:32:27.010 =====Discovery Log Entry 0====== 00:32:27.010 trtype: tcp 00:32:27.010 adrfam: ipv4 00:32:27.010 subtype: current discovery subsystem 00:32:27.010 treq: not specified, sq flow control disable supported 00:32:27.010 portid: 1 00:32:27.010 trsvcid: 4420 00:32:27.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:27.010 traddr: 10.0.0.1 00:32:27.010 eflags: none 00:32:27.010 sectype: none 00:32:27.010 =====Discovery Log Entry 1====== 00:32:27.010 trtype: tcp 00:32:27.010 adrfam: ipv4 00:32:27.010 subtype: nvme subsystem 00:32:27.010 treq: not specified, sq flow control disable supported 00:32:27.010 portid: 1 00:32:27.010 trsvcid: 4420 00:32:27.010 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:27.010 traddr: 10.0.0.1 00:32:27.010 eflags: none 00:32:27.010 sectype: none 00:32:27.010 18:21:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:27.010 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:27.011 18:21:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:30.302 Initializing NVMe Controllers 00:32:30.302 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:30.302 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:30.302 Initialization complete. Launching workers. 
00:32:30.302 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56122, failed: 0 00:32:30.302 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56122, failed to submit 0 00:32:30.302 success 0, unsuccessful 56122, failed 0 00:32:30.302 18:21:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:30.302 18:21:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:33.592 Initializing NVMe Controllers 00:32:33.592 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:33.592 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:33.592 Initialization complete. Launching workers. 00:32:33.592 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101454, failed: 0 00:32:33.592 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25562, failed to submit 75892 00:32:33.592 success 0, unsuccessful 25562, failed 0 00:32:33.592 18:21:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:33.592 18:21:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:36.937 Initializing NVMe Controllers 00:32:36.937 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:36.937 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:36.937 Initialization complete. Launching workers. 
00:32:36.937 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97547, failed: 0 00:32:36.937 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24362, failed to submit 73185 00:32:36.937 success 0, unsuccessful 24362, failed 0 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:36.937 18:21:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:37.504 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:37.504 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:37.762 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:37.762 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:37.762 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:37.762 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:37.762 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:37.762 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:37.762 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:38.700 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:38.959 00:32:38.959 real 0m14.484s 00:32:38.959 user 0m6.688s 00:32:38.959 sys 0m3.247s 00:32:38.959 18:22:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.959 18:22:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.959 ************************************ 00:32:38.959 END TEST kernel_target_abort 00:32:38.959 ************************************ 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.959 rmmod nvme_tcp 00:32:38.959 rmmod nvme_fabrics 00:32:38.959 rmmod nvme_keyring 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1655392 ']' 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1655392 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1655392 ']' 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1655392 00:32:38.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1655392) - No such process 00:32:38.959 18:22:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1655392 is not found' 00:32:38.959 Process with pid 1655392 is not found 00:32:38.960 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:38.960 18:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:39.896 Waiting for block devices as requested 00:32:39.896 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:40.154 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:40.154 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:40.412 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:40.412 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:40.412 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:40.412 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:40.672 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:40.672 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:40.672 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:40.672 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:40.932 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:40.932 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:40.932 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:41.190 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:41.190 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:41.190 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:41.450 18:22:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.356 18:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.356 00:32:43.356 real 0m38.725s 00:32:43.356 user 1m3.092s 00:32:43.356 sys 0m9.655s 00:32:43.356 18:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.356 18:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.356 ************************************ 00:32:43.356 END TEST nvmf_abort_qd_sizes 00:32:43.356 ************************************ 00:32:43.356 18:22:06 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:43.356 18:22:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:43.356 18:22:06 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:32:43.356 18:22:06 -- common/autotest_common.sh@10 -- # set +x 00:32:43.356 ************************************ 00:32:43.356 START TEST keyring_file 00:32:43.356 ************************************ 00:32:43.356 18:22:06 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:43.616 * Looking for test storage... 00:32:43.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:43.616 18:22:06 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:43.616 18:22:06 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:32:43.616 18:22:06 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:43.616 18:22:06 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:43.616 18:22:06 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.616 18:22:06 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.616 18:22:06 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.616 18:22:06 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.617 18:22:06 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:43.617 18:22:06 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.617 18:22:06 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.617 --rc genhtml_branch_coverage=1 00:32:43.617 --rc genhtml_function_coverage=1 00:32:43.617 --rc genhtml_legend=1 00:32:43.617 --rc geninfo_all_blocks=1 00:32:43.617 --rc geninfo_unexecuted_blocks=1 00:32:43.617 00:32:43.617 ' 00:32:43.617 18:22:06 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.617 --rc genhtml_branch_coverage=1 00:32:43.617 --rc genhtml_function_coverage=1 00:32:43.617 --rc genhtml_legend=1 00:32:43.617 --rc geninfo_all_blocks=1 00:32:43.617 --rc 
geninfo_unexecuted_blocks=1 00:32:43.617 00:32:43.617 ' 00:32:43.617 18:22:06 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.617 --rc genhtml_branch_coverage=1 00:32:43.617 --rc genhtml_function_coverage=1 00:32:43.617 --rc genhtml_legend=1 00:32:43.617 --rc geninfo_all_blocks=1 00:32:43.617 --rc geninfo_unexecuted_blocks=1 00:32:43.617 00:32:43.617 ' 00:32:43.617 18:22:06 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.617 --rc genhtml_branch_coverage=1 00:32:43.617 --rc genhtml_function_coverage=1 00:32:43.617 --rc genhtml_legend=1 00:32:43.617 --rc geninfo_all_blocks=1 00:32:43.617 --rc geninfo_unexecuted_blocks=1 00:32:43.617 00:32:43.617 ' 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.617 18:22:06 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.617 18:22:06 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.617 18:22:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.617 18:22:06 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.617 18:22:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.617 18:22:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:43.617 18:22:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:32:43.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VnJPsJX8lE 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VnJPsJX8lE 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VnJPsJX8lE 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VnJPsJX8lE 00:32:43.617 18:22:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CTmaGPEMj8 00:32:43.617 18:22:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:43.617 18:22:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:43.618 18:22:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:43.618 18:22:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CTmaGPEMj8 00:32:43.618 18:22:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CTmaGPEMj8 00:32:43.618 18:22:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CTmaGPEMj8 
00:32:43.618 18:22:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=1661165 00:32:43.618 18:22:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:43.618 18:22:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1661165 00:32:43.618 18:22:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1661165 ']' 00:32:43.618 18:22:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.618 18:22:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.618 18:22:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.618 18:22:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.618 18:22:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:43.876 [2024-12-09 18:22:06.672562] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:32:43.876 [2024-12-09 18:22:06.672650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661165 ] 00:32:43.876 [2024-12-09 18:22:06.738684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.876 [2024-12-09 18:22:06.795671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:44.135 18:22:07 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:44.135 [2024-12-09 18:22:07.051985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.135 null0 00:32:44.135 [2024-12-09 18:22:07.084060] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:44.135 [2024-12-09 18:22:07.084522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.135 18:22:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:44.135 [2024-12-09 18:22:07.108101] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:44.135 request: 00:32:44.135 { 00:32:44.135 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.135 "secure_channel": false, 00:32:44.135 "listen_address": { 00:32:44.135 "trtype": "tcp", 00:32:44.135 "traddr": "127.0.0.1", 00:32:44.135 "trsvcid": "4420" 00:32:44.135 }, 00:32:44.135 "method": "nvmf_subsystem_add_listener", 00:32:44.135 "req_id": 1 00:32:44.135 } 00:32:44.135 Got JSON-RPC error response 00:32:44.135 response: 00:32:44.135 { 00:32:44.135 "code": -32602, 00:32:44.135 "message": "Invalid parameters" 00:32:44.135 } 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:44.135 18:22:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=1661178 00:32:44.135 18:22:07 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:44.135 18:22:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1661178 /var/tmp/bperf.sock 00:32:44.135 18:22:07 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1661178 ']' 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:44.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.135 18:22:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:44.135 [2024-12-09 18:22:07.155830] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 00:32:44.135 [2024-12-09 18:22:07.155914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661178 ] 00:32:44.394 [2024-12-09 18:22:07.220968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.394 [2024-12-09 18:22:07.279664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.394 18:22:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.394 18:22:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:44.394 18:22:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:44.394 18:22:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:44.651 18:22:07 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CTmaGPEMj8 00:32:44.651 18:22:07 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CTmaGPEMj8 00:32:44.909 18:22:07 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:44.909 18:22:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:44.909 18:22:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.909 18:22:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.909 18:22:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.167 18:22:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VnJPsJX8lE == \/\t\m\p\/\t\m\p\.\V\n\J\P\s\J\X\8\l\E ]] 00:32:45.167 18:22:08 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:45.167 18:22:08 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:45.425 18:22:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:45.425 18:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.425 18:22:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:45.683 18:22:08 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.CTmaGPEMj8 == \/\t\m\p\/\t\m\p\.\C\T\m\a\G\P\E\M\j\8 ]] 00:32:45.683 18:22:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:45.683 18:22:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:45.683 18:22:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:45.683 18:22:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:45.683 18:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.683 18:22:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:32:45.940 18:22:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:45.940 18:22:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:45.940 18:22:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:45.940 18:22:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:45.940 18:22:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:45.940 18:22:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.940 18:22:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:46.198 18:22:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:46.198 18:22:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:46.198 18:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:46.455 [2024-12-09 18:22:09.300277] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:46.455 nvme0n1 00:32:46.455 18:22:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:46.455 18:22:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:46.456 18:22:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:46.456 18:22:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.456 18:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.456 18:22:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:32:46.714 18:22:09 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:46.714 18:22:09 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:46.714 18:22:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:46.714 18:22:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:46.714 18:22:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.714 18:22:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:46.714 18:22:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.971 18:22:09 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:46.971 18:22:09 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:47.230 Running I/O for 1 seconds... 00:32:48.163 9980.00 IOPS, 38.98 MiB/s 00:32:48.163 Latency(us) 00:32:48.163 [2024-12-09T17:22:11.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.163 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:48.163 nvme0n1 : 1.01 10033.56 39.19 0.00 0.00 12718.08 4102.07 18835.53 00:32:48.163 [2024-12-09T17:22:11.204Z] =================================================================================================================== 00:32:48.163 [2024-12-09T17:22:11.204Z] Total : 10033.56 39.19 0.00 0.00 12718.08 4102.07 18835.53 00:32:48.163 { 00:32:48.163 "results": [ 00:32:48.163 { 00:32:48.163 "job": "nvme0n1", 00:32:48.163 "core_mask": "0x2", 00:32:48.163 "workload": "randrw", 00:32:48.163 "percentage": 50, 00:32:48.163 "status": "finished", 00:32:48.163 "queue_depth": 128, 00:32:48.163 "io_size": 4096, 00:32:48.163 "runtime": 1.007419, 00:32:48.163 "iops": 10033.561010860427, 00:32:48.163 "mibps": 39.19359769867354, 
00:32:48.163 "io_failed": 0, 00:32:48.163 "io_timeout": 0, 00:32:48.163 "avg_latency_us": 12718.079699834381, 00:32:48.163 "min_latency_us": 4102.068148148148, 00:32:48.163 "max_latency_us": 18835.53185185185 00:32:48.163 } 00:32:48.163 ], 00:32:48.163 "core_count": 1 00:32:48.163 } 00:32:48.163 18:22:11 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:48.163 18:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:48.421 18:22:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:48.421 18:22:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:48.421 18:22:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:48.421 18:22:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:48.421 18:22:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:48.421 18:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.679 18:22:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:48.679 18:22:11 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:48.679 18:22:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:48.679 18:22:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:48.679 18:22:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:48.679 18:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.679 18:22:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:48.936 18:22:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:48.936 18:22:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:48.936 18:22:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:48.936 18:22:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:48.936 18:22:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:48.936 18:22:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.936 18:22:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:48.936 18:22:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.937 18:22:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:48.937 18:22:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:49.194 [2024-12-09 18:22:12.173086] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:49.194 [2024-12-09 18:22:12.173238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x772170 (107): Transport endpoint is not connected 00:32:49.194 [2024-12-09 18:22:12.174231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x772170 (9): Bad file descriptor 00:32:49.194 [2024-12-09 18:22:12.175231] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:49.195 [2024-12-09 18:22:12.175250] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:49.195 [2024-12-09 18:22:12.175264] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:49.195 [2024-12-09 18:22:12.175277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:32:49.195 request: 00:32:49.195 { 00:32:49.195 "name": "nvme0", 00:32:49.195 "trtype": "tcp", 00:32:49.195 "traddr": "127.0.0.1", 00:32:49.195 "adrfam": "ipv4", 00:32:49.195 "trsvcid": "4420", 00:32:49.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:49.195 "prchk_reftag": false, 00:32:49.195 "prchk_guard": false, 00:32:49.195 "hdgst": false, 00:32:49.195 "ddgst": false, 00:32:49.195 "psk": "key1", 00:32:49.195 "allow_unrecognized_csi": false, 00:32:49.195 "method": "bdev_nvme_attach_controller", 00:32:49.195 "req_id": 1 00:32:49.195 } 00:32:49.195 Got JSON-RPC error response 00:32:49.195 response: 00:32:49.195 { 00:32:49.195 "code": -5, 00:32:49.195 "message": "Input/output error" 00:32:49.195 } 00:32:49.195 18:22:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:49.195 18:22:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:49.195 18:22:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:49.195 18:22:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:49.195 18:22:12 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:49.195 18:22:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:49.195 18:22:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:49.195 18:22:12 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:32:49.195 18:22:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:49.195 18:22:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:49.453 18:22:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:49.453 18:22:12 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:49.453 18:22:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:49.453 18:22:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:49.453 18:22:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:49.453 18:22:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:49.453 18:22:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:49.712 18:22:12 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:49.712 18:22:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:49.712 18:22:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:50.278 18:22:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:50.278 18:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:50.278 18:22:13 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:50.278 18:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.278 18:22:13 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:50.536 18:22:13 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:32:50.536 18:22:13 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.VnJPsJX8lE 00:32:50.536 18:22:13 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.536 18:22:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:50.536 18:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:51.103 [2024-12-09 18:22:13.837396] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VnJPsJX8lE': 0100660 00:32:51.103 [2024-12-09 18:22:13.837433] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:51.103 request: 00:32:51.103 { 00:32:51.103 "name": "key0", 00:32:51.103 "path": "/tmp/tmp.VnJPsJX8lE", 00:32:51.103 "method": "keyring_file_add_key", 00:32:51.103 "req_id": 1 00:32:51.103 } 00:32:51.103 Got JSON-RPC error response 00:32:51.103 response: 00:32:51.103 { 00:32:51.103 "code": -1, 00:32:51.103 "message": "Operation not permitted" 00:32:51.103 } 00:32:51.103 18:22:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:51.103 18:22:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:51.103 18:22:13 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:51.103 18:22:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:51.103 18:22:13 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.VnJPsJX8lE 00:32:51.103 18:22:13 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:51.103 18:22:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VnJPsJX8lE 00:32:51.103 18:22:14 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.VnJPsJX8lE 00:32:51.103 18:22:14 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:51.103 18:22:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:51.103 18:22:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:51.103 18:22:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.103 18:22:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:51.103 18:22:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.672 18:22:14 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:51.672 18:22:14 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:51.672 18:22:14 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.672 18:22:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.672 [2024-12-09 18:22:14.655615] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VnJPsJX8lE': No such file or directory 00:32:51.672 [2024-12-09 18:22:14.655646] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:51.672 [2024-12-09 18:22:14.655676] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:51.672 [2024-12-09 18:22:14.655689] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:51.672 [2024-12-09 18:22:14.655703] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:51.672 [2024-12-09 18:22:14.655713] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:51.672 request: 00:32:51.672 { 00:32:51.672 "name": "nvme0", 00:32:51.672 "trtype": "tcp", 00:32:51.672 "traddr": "127.0.0.1", 00:32:51.672 "adrfam": "ipv4", 00:32:51.672 "trsvcid": "4420", 00:32:51.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:51.672 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:32:51.672 "prchk_reftag": false, 00:32:51.672 "prchk_guard": false, 00:32:51.672 "hdgst": false, 00:32:51.672 "ddgst": false, 00:32:51.672 "psk": "key0", 00:32:51.672 "allow_unrecognized_csi": false, 00:32:51.672 "method": "bdev_nvme_attach_controller", 00:32:51.672 "req_id": 1 00:32:51.672 } 00:32:51.672 Got JSON-RPC error response 00:32:51.672 response: 00:32:51.672 { 00:32:51.672 "code": -19, 00:32:51.672 "message": "No such device" 00:32:51.672 } 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:51.672 18:22:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:51.672 18:22:14 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:51.672 18:22:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:51.931 18:22:14 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4cGBGqYHOm 00:32:51.931 18:22:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:51.931 18:22:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:51.931 18:22:14 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:32:51.931 18:22:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:51.931 18:22:14 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:51.931 18:22:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:51.931 18:22:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:52.190 18:22:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4cGBGqYHOm 00:32:52.190 18:22:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4cGBGqYHOm 00:32:52.190 18:22:14 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.4cGBGqYHOm 00:32:52.190 18:22:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4cGBGqYHOm 00:32:52.190 18:22:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4cGBGqYHOm 00:32:52.451 18:22:15 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:52.451 18:22:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:52.709 nvme0n1 00:32:52.709 18:22:15 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:52.709 18:22:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:52.709 18:22:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:52.709 18:22:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.709 18:22:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.709 
18:22:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:52.966 18:22:15 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:52.966 18:22:15 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:52.966 18:22:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:53.223 18:22:16 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:53.223 18:22:16 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:53.223 18:22:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.223 18:22:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.223 18:22:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.481 18:22:16 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:53.481 18:22:16 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:53.481 18:22:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:53.481 18:22:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.481 18:22:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.481 18:22:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.481 18:22:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.738 18:22:16 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:53.738 18:22:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:53.738 18:22:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:32:53.996 18:22:16 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:53.996 18:22:16 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:53.996 18:22:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.255 18:22:17 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:54.255 18:22:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4cGBGqYHOm 00:32:54.255 18:22:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4cGBGqYHOm 00:32:54.513 18:22:17 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CTmaGPEMj8 00:32:54.513 18:22:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CTmaGPEMj8 00:32:55.080 18:22:17 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.080 18:22:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.338 nvme0n1 00:32:55.338 18:22:18 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:55.338 18:22:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:55.599 18:22:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:55.599 "subsystems": [ 00:32:55.599 { 00:32:55.599 "subsystem": "keyring", 00:32:55.599 
"config": [ 00:32:55.599 { 00:32:55.599 "method": "keyring_file_add_key", 00:32:55.599 "params": { 00:32:55.599 "name": "key0", 00:32:55.599 "path": "/tmp/tmp.4cGBGqYHOm" 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "keyring_file_add_key", 00:32:55.599 "params": { 00:32:55.599 "name": "key1", 00:32:55.599 "path": "/tmp/tmp.CTmaGPEMj8" 00:32:55.599 } 00:32:55.599 } 00:32:55.599 ] 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "subsystem": "iobuf", 00:32:55.599 "config": [ 00:32:55.599 { 00:32:55.599 "method": "iobuf_set_options", 00:32:55.599 "params": { 00:32:55.599 "small_pool_count": 8192, 00:32:55.599 "large_pool_count": 1024, 00:32:55.599 "small_bufsize": 8192, 00:32:55.599 "large_bufsize": 135168, 00:32:55.599 "enable_numa": false 00:32:55.599 } 00:32:55.599 } 00:32:55.599 ] 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "subsystem": "sock", 00:32:55.599 "config": [ 00:32:55.599 { 00:32:55.599 "method": "sock_set_default_impl", 00:32:55.599 "params": { 00:32:55.599 "impl_name": "posix" 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "sock_impl_set_options", 00:32:55.599 "params": { 00:32:55.599 "impl_name": "ssl", 00:32:55.599 "recv_buf_size": 4096, 00:32:55.599 "send_buf_size": 4096, 00:32:55.599 "enable_recv_pipe": true, 00:32:55.599 "enable_quickack": false, 00:32:55.599 "enable_placement_id": 0, 00:32:55.599 "enable_zerocopy_send_server": true, 00:32:55.599 "enable_zerocopy_send_client": false, 00:32:55.599 "zerocopy_threshold": 0, 00:32:55.599 "tls_version": 0, 00:32:55.599 "enable_ktls": false 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "sock_impl_set_options", 00:32:55.599 "params": { 00:32:55.599 "impl_name": "posix", 00:32:55.599 "recv_buf_size": 2097152, 00:32:55.599 "send_buf_size": 2097152, 00:32:55.599 "enable_recv_pipe": true, 00:32:55.599 "enable_quickack": false, 00:32:55.599 "enable_placement_id": 0, 00:32:55.599 "enable_zerocopy_send_server": true, 00:32:55.599 
"enable_zerocopy_send_client": false, 00:32:55.599 "zerocopy_threshold": 0, 00:32:55.599 "tls_version": 0, 00:32:55.599 "enable_ktls": false 00:32:55.599 } 00:32:55.599 } 00:32:55.599 ] 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "subsystem": "vmd", 00:32:55.599 "config": [] 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "subsystem": "accel", 00:32:55.599 "config": [ 00:32:55.599 { 00:32:55.599 "method": "accel_set_options", 00:32:55.599 "params": { 00:32:55.599 "small_cache_size": 128, 00:32:55.599 "large_cache_size": 16, 00:32:55.599 "task_count": 2048, 00:32:55.599 "sequence_count": 2048, 00:32:55.599 "buf_count": 2048 00:32:55.599 } 00:32:55.599 } 00:32:55.599 ] 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "subsystem": "bdev", 00:32:55.599 "config": [ 00:32:55.599 { 00:32:55.599 "method": "bdev_set_options", 00:32:55.599 "params": { 00:32:55.599 "bdev_io_pool_size": 65535, 00:32:55.599 "bdev_io_cache_size": 256, 00:32:55.599 "bdev_auto_examine": true, 00:32:55.599 "iobuf_small_cache_size": 128, 00:32:55.599 "iobuf_large_cache_size": 16 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "bdev_raid_set_options", 00:32:55.599 "params": { 00:32:55.599 "process_window_size_kb": 1024, 00:32:55.599 "process_max_bandwidth_mb_sec": 0 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "bdev_iscsi_set_options", 00:32:55.599 "params": { 00:32:55.599 "timeout_sec": 30 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "bdev_nvme_set_options", 00:32:55.599 "params": { 00:32:55.599 "action_on_timeout": "none", 00:32:55.599 "timeout_us": 0, 00:32:55.599 "timeout_admin_us": 0, 00:32:55.599 "keep_alive_timeout_ms": 10000, 00:32:55.599 "arbitration_burst": 0, 00:32:55.599 "low_priority_weight": 0, 00:32:55.599 "medium_priority_weight": 0, 00:32:55.599 "high_priority_weight": 0, 00:32:55.599 "nvme_adminq_poll_period_us": 10000, 00:32:55.599 "nvme_ioq_poll_period_us": 0, 00:32:55.599 "io_queue_requests": 512, 00:32:55.599 
"delay_cmd_submit": true, 00:32:55.599 "transport_retry_count": 4, 00:32:55.599 "bdev_retry_count": 3, 00:32:55.599 "transport_ack_timeout": 0, 00:32:55.599 "ctrlr_loss_timeout_sec": 0, 00:32:55.599 "reconnect_delay_sec": 0, 00:32:55.599 "fast_io_fail_timeout_sec": 0, 00:32:55.599 "disable_auto_failback": false, 00:32:55.599 "generate_uuids": false, 00:32:55.599 "transport_tos": 0, 00:32:55.599 "nvme_error_stat": false, 00:32:55.599 "rdma_srq_size": 0, 00:32:55.599 "io_path_stat": false, 00:32:55.599 "allow_accel_sequence": false, 00:32:55.599 "rdma_max_cq_size": 0, 00:32:55.599 "rdma_cm_event_timeout_ms": 0, 00:32:55.599 "dhchap_digests": [ 00:32:55.599 "sha256", 00:32:55.599 "sha384", 00:32:55.599 "sha512" 00:32:55.599 ], 00:32:55.599 "dhchap_dhgroups": [ 00:32:55.599 "null", 00:32:55.599 "ffdhe2048", 00:32:55.599 "ffdhe3072", 00:32:55.599 "ffdhe4096", 00:32:55.599 "ffdhe6144", 00:32:55.599 "ffdhe8192" 00:32:55.599 ] 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "bdev_nvme_attach_controller", 00:32:55.599 "params": { 00:32:55.599 "name": "nvme0", 00:32:55.599 "trtype": "TCP", 00:32:55.599 "adrfam": "IPv4", 00:32:55.599 "traddr": "127.0.0.1", 00:32:55.599 "trsvcid": "4420", 00:32:55.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.599 "prchk_reftag": false, 00:32:55.599 "prchk_guard": false, 00:32:55.599 "ctrlr_loss_timeout_sec": 0, 00:32:55.599 "reconnect_delay_sec": 0, 00:32:55.599 "fast_io_fail_timeout_sec": 0, 00:32:55.599 "psk": "key0", 00:32:55.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:55.599 "hdgst": false, 00:32:55.599 "ddgst": false, 00:32:55.599 "multipath": "multipath" 00:32:55.599 } 00:32:55.599 }, 00:32:55.599 { 00:32:55.599 "method": "bdev_nvme_set_hotplug", 00:32:55.599 "params": { 00:32:55.599 "period_us": 100000, 00:32:55.599 "enable": false 00:32:55.600 } 00:32:55.600 }, 00:32:55.600 { 00:32:55.600 "method": "bdev_wait_for_examine" 00:32:55.600 } 00:32:55.600 ] 00:32:55.600 }, 00:32:55.600 { 00:32:55.600 
"subsystem": "nbd", 00:32:55.600 "config": [] 00:32:55.600 } 00:32:55.600 ] 00:32:55.600 }' 00:32:55.600 18:22:18 keyring_file -- keyring/file.sh@115 -- # killprocess 1661178 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1661178 ']' 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1661178 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661178 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661178' 00:32:55.600 killing process with pid 1661178 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@973 -- # kill 1661178 00:32:55.600 Received shutdown signal, test time was about 1.000000 seconds 00:32:55.600 00:32:55.600 Latency(us) 00:32:55.600 [2024-12-09T17:22:18.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.600 [2024-12-09T17:22:18.641Z] =================================================================================================================== 00:32:55.600 [2024-12-09T17:22:18.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:55.600 18:22:18 keyring_file -- common/autotest_common.sh@978 -- # wait 1661178 00:32:55.860 18:22:18 keyring_file -- keyring/file.sh@118 -- # bperfpid=1662664 00:32:55.861 18:22:18 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1662664 /var/tmp/bperf.sock 00:32:55.861 18:22:18 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1662664 ']' 00:32:55.861 18:22:18 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:55.861 18:22:18 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:55.861 18:22:18 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.861 18:22:18 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:55.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:55.861 18:22:18 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:55.861 "subsystems": [ 00:32:55.861 { 00:32:55.861 "subsystem": "keyring", 00:32:55.861 "config": [ 00:32:55.861 { 00:32:55.861 "method": "keyring_file_add_key", 00:32:55.861 "params": { 00:32:55.861 "name": "key0", 00:32:55.861 "path": "/tmp/tmp.4cGBGqYHOm" 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "keyring_file_add_key", 00:32:55.861 "params": { 00:32:55.861 "name": "key1", 00:32:55.861 "path": "/tmp/tmp.CTmaGPEMj8" 00:32:55.861 } 00:32:55.861 } 00:32:55.861 ] 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "subsystem": "iobuf", 00:32:55.861 "config": [ 00:32:55.861 { 00:32:55.861 "method": "iobuf_set_options", 00:32:55.861 "params": { 00:32:55.861 "small_pool_count": 8192, 00:32:55.861 "large_pool_count": 1024, 00:32:55.861 "small_bufsize": 8192, 00:32:55.861 "large_bufsize": 135168, 00:32:55.861 "enable_numa": false 00:32:55.861 } 00:32:55.861 } 00:32:55.861 ] 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "subsystem": "sock", 00:32:55.861 "config": [ 00:32:55.861 { 00:32:55.861 "method": "sock_set_default_impl", 00:32:55.861 "params": { 00:32:55.861 "impl_name": "posix" 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "sock_impl_set_options", 00:32:55.861 "params": { 00:32:55.861 "impl_name": "ssl", 00:32:55.861 "recv_buf_size": 4096, 00:32:55.861 
"send_buf_size": 4096, 00:32:55.861 "enable_recv_pipe": true, 00:32:55.861 "enable_quickack": false, 00:32:55.861 "enable_placement_id": 0, 00:32:55.861 "enable_zerocopy_send_server": true, 00:32:55.861 "enable_zerocopy_send_client": false, 00:32:55.861 "zerocopy_threshold": 0, 00:32:55.861 "tls_version": 0, 00:32:55.861 "enable_ktls": false 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "sock_impl_set_options", 00:32:55.861 "params": { 00:32:55.861 "impl_name": "posix", 00:32:55.861 "recv_buf_size": 2097152, 00:32:55.861 "send_buf_size": 2097152, 00:32:55.861 "enable_recv_pipe": true, 00:32:55.861 "enable_quickack": false, 00:32:55.861 "enable_placement_id": 0, 00:32:55.861 "enable_zerocopy_send_server": true, 00:32:55.861 "enable_zerocopy_send_client": false, 00:32:55.861 "zerocopy_threshold": 0, 00:32:55.861 "tls_version": 0, 00:32:55.861 "enable_ktls": false 00:32:55.861 } 00:32:55.861 } 00:32:55.861 ] 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "subsystem": "vmd", 00:32:55.861 "config": [] 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "subsystem": "accel", 00:32:55.861 "config": [ 00:32:55.861 { 00:32:55.861 "method": "accel_set_options", 00:32:55.861 "params": { 00:32:55.861 "small_cache_size": 128, 00:32:55.861 "large_cache_size": 16, 00:32:55.861 "task_count": 2048, 00:32:55.861 "sequence_count": 2048, 00:32:55.861 "buf_count": 2048 00:32:55.861 } 00:32:55.861 } 00:32:55.861 ] 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "subsystem": "bdev", 00:32:55.861 "config": [ 00:32:55.861 { 00:32:55.861 "method": "bdev_set_options", 00:32:55.861 "params": { 00:32:55.861 "bdev_io_pool_size": 65535, 00:32:55.861 "bdev_io_cache_size": 256, 00:32:55.861 "bdev_auto_examine": true, 00:32:55.861 "iobuf_small_cache_size": 128, 00:32:55.861 "iobuf_large_cache_size": 16 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "bdev_raid_set_options", 00:32:55.861 "params": { 00:32:55.861 "process_window_size_kb": 1024, 00:32:55.861 
"process_max_bandwidth_mb_sec": 0 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "bdev_iscsi_set_options", 00:32:55.861 "params": { 00:32:55.861 "timeout_sec": 30 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "bdev_nvme_set_options", 00:32:55.861 "params": { 00:32:55.861 "action_on_timeout": "none", 00:32:55.861 "timeout_us": 0, 00:32:55.861 "timeout_admin_us": 0, 00:32:55.861 "keep_alive_timeout_ms": 10000, 00:32:55.861 "arbitration_burst": 0, 00:32:55.861 "low_priority_weight": 0, 00:32:55.861 "medium_priority_weight": 0, 00:32:55.861 "high_priority_weight": 0, 00:32:55.861 "nvme_adminq_poll_period_us": 10000, 00:32:55.861 "nvme_ioq_poll_period_us": 0, 00:32:55.861 "io_queue_requests": 512, 00:32:55.861 "delay_cmd_submit": true, 00:32:55.861 "transport_retry_count": 4, 00:32:55.861 "bdev_retry_count": 3, 00:32:55.861 "transport_ack_timeout": 0, 00:32:55.861 "ctrlr_loss_timeout_sec": 0, 00:32:55.861 "reconnect_delay_sec": 0, 00:32:55.861 "fast_io_fail_timeout_sec": 0, 00:32:55.861 "disable_auto_failback": false, 00:32:55.861 "generate_uuids": false, 00:32:55.861 "transport_tos": 0, 00:32:55.861 "nvme_error_stat": false, 00:32:55.861 "rdma_srq_size": 0, 00:32:55.861 "io_path_stat": false, 00:32:55.861 "allow_accel_sequence": false, 00:32:55.861 "rdma_max_cq_size": 0, 00:32:55.861 "rdma_cm_event_timeout_ms": 0, 00:32:55.861 "dhchap_digests": [ 00:32:55.861 "sha256", 00:32:55.861 "sha384", 00:32:55.861 "sha512" 00:32:55.861 ], 00:32:55.861 "dhchap_dhgroups": [ 00:32:55.861 "null", 00:32:55.861 "ffdhe2048", 00:32:55.861 "ffdhe3072", 00:32:55.861 "ffdhe4096", 00:32:55.861 "ffdhe6144", 00:32:55.861 "ffdhe8192" 00:32:55.861 ] 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "bdev_nvme_attach_controller", 00:32:55.861 "params": { 00:32:55.861 "name": "nvme0", 00:32:55.861 "trtype": "TCP", 00:32:55.861 "adrfam": "IPv4", 00:32:55.861 "traddr": "127.0.0.1", 00:32:55.861 "trsvcid": "4420", 00:32:55.861 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:32:55.861 "prchk_reftag": false, 00:32:55.861 "prchk_guard": false, 00:32:55.861 "ctrlr_loss_timeout_sec": 0, 00:32:55.861 "reconnect_delay_sec": 0, 00:32:55.861 "fast_io_fail_timeout_sec": 0, 00:32:55.861 "psk": "key0", 00:32:55.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:55.861 "hdgst": false, 00:32:55.861 "ddgst": false, 00:32:55.861 "multipath": "multipath" 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "bdev_nvme_set_hotplug", 00:32:55.861 "params": { 00:32:55.861 "period_us": 100000, 00:32:55.861 "enable": false 00:32:55.861 } 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "method": "bdev_wait_for_examine" 00:32:55.861 } 00:32:55.861 ] 00:32:55.861 }, 00:32:55.861 { 00:32:55.861 "subsystem": "nbd", 00:32:55.861 "config": [] 00:32:55.861 } 00:32:55.861 ] 00:32:55.861 }' 00:32:55.862 18:22:18 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.862 18:22:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:55.862 [2024-12-09 18:22:18.789356] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:32:55.862 [2024-12-09 18:22:18.789438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662664 ] 00:32:55.862 [2024-12-09 18:22:18.859848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.122 [2024-12-09 18:22:18.920494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.122 [2024-12-09 18:22:19.111929] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:56.380 18:22:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.380 18:22:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:56.380 18:22:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:56.380 18:22:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.380 18:22:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:56.639 18:22:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:56.639 18:22:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:56.639 18:22:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:56.639 18:22:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.639 18:22:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.639 18:22:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.639 18:22:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:56.897 18:22:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:56.897 18:22:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:56.897 18:22:19 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:56.897 18:22:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.897 18:22:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.897 18:22:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.897 18:22:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:57.155 18:22:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:57.155 18:22:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:57.155 18:22:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:57.155 18:22:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:57.414 18:22:20 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:57.414 18:22:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:57.414 18:22:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4cGBGqYHOm /tmp/tmp.CTmaGPEMj8 00:32:57.414 18:22:20 keyring_file -- keyring/file.sh@20 -- # killprocess 1662664 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1662664 ']' 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1662664 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662664 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1662664' 00:32:57.414 killing process with pid 1662664 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@973 -- # kill 1662664 00:32:57.414 Received shutdown signal, test time was about 1.000000 seconds 00:32:57.414 00:32:57.414 Latency(us) 00:32:57.414 [2024-12-09T17:22:20.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.414 [2024-12-09T17:22:20.455Z] =================================================================================================================== 00:32:57.414 [2024-12-09T17:22:20.455Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:57.414 18:22:20 keyring_file -- common/autotest_common.sh@978 -- # wait 1662664 00:32:57.673 18:22:20 keyring_file -- keyring/file.sh@21 -- # killprocess 1661165 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1661165 ']' 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1661165 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661165 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661165' 00:32:57.673 killing process with pid 1661165 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@973 -- # kill 1661165 00:32:57.673 18:22:20 keyring_file -- common/autotest_common.sh@978 -- # wait 1661165 00:32:58.240 00:32:58.240 real 0m14.716s 00:32:58.240 user 0m37.480s 00:32:58.240 sys 0m3.194s 00:32:58.240 18:22:21 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:32:58.240 18:22:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:58.240 ************************************ 00:32:58.240 END TEST keyring_file 00:32:58.240 ************************************ 00:32:58.240 18:22:21 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:32:58.240 18:22:21 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:58.240 18:22:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:58.240 18:22:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:58.240 18:22:21 -- common/autotest_common.sh@10 -- # set +x 00:32:58.240 ************************************ 00:32:58.240 START TEST keyring_linux 00:32:58.240 ************************************ 00:32:58.240 18:22:21 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:58.240 Joined session keyring: 51578044 00:32:58.240 * Looking for test storage... 
00:32:58.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:58.240 18:22:21 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:58.240 18:22:21 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:32:58.240 18:22:21 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:58.240 18:22:21 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:58.240 18:22:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:58.241 18:22:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:58.241 18:22:21 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:58.241 18:22:21 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:58.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.241 --rc genhtml_branch_coverage=1 00:32:58.241 --rc genhtml_function_coverage=1 00:32:58.241 --rc genhtml_legend=1 00:32:58.241 --rc geninfo_all_blocks=1 00:32:58.241 --rc geninfo_unexecuted_blocks=1 00:32:58.241 00:32:58.241 ' 00:32:58.241 18:22:21 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:58.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.241 --rc genhtml_branch_coverage=1 00:32:58.241 --rc genhtml_function_coverage=1 00:32:58.241 --rc genhtml_legend=1 00:32:58.241 --rc geninfo_all_blocks=1 00:32:58.241 --rc geninfo_unexecuted_blocks=1 00:32:58.241 00:32:58.241 ' 
00:32:58.241 18:22:21 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:58.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.241 --rc genhtml_branch_coverage=1 00:32:58.241 --rc genhtml_function_coverage=1 00:32:58.241 --rc genhtml_legend=1 00:32:58.241 --rc geninfo_all_blocks=1 00:32:58.241 --rc geninfo_unexecuted_blocks=1 00:32:58.241 00:32:58.241 ' 00:32:58.241 18:22:21 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:58.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.241 --rc genhtml_branch_coverage=1 00:32:58.241 --rc genhtml_function_coverage=1 00:32:58.241 --rc genhtml_legend=1 00:32:58.241 --rc geninfo_all_blocks=1 00:32:58.241 --rc geninfo_unexecuted_blocks=1 00:32:58.241 00:32:58.241 ' 00:32:58.241 18:22:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:58.241 18:22:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:32:58.241 18:22:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.500 18:22:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:58.500 18:22:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.500 18:22:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.500 18:22:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.500 18:22:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.500 18:22:21 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.500 18:22:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.500 18:22:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:58.500 18:22:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:32:58.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:58.500 /tmp/:spdk-test:key0 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:58.500 18:22:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:58.500 18:22:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:58.500 /tmp/:spdk-test:key1 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1663127 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:58.500 18:22:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1663127 00:32:58.500 18:22:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1663127 ']' 00:32:58.500 18:22:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.500 18:22:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.500 18:22:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.500 18:22:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.500 18:22:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:58.500 [2024-12-09 18:22:21.431559] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:32:58.500 [2024-12-09 18:22:21.431646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663127 ] 00:32:58.500 [2024-12-09 18:22:21.499704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.759 [2024-12-09 18:22:21.557192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:59.018 18:22:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:59.018 [2024-12-09 18:22:21.826337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:59.018 null0 00:32:59.018 [2024-12-09 18:22:21.858372] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:59.018 [2024-12-09 18:22:21.858910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.018 18:22:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:59.018 1072343865 00:32:59.018 18:22:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:59.018 561685567 00:32:59.018 18:22:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1663143 00:32:59.018 18:22:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1663143 /var/tmp/bperf.sock 00:32:59.018 18:22:21 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1663143 ']' 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:59.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.018 18:22:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:59.018 [2024-12-09 18:22:21.927380] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization... 
00:32:59.018 [2024-12-09 18:22:21.927470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663143 ] 00:32:59.018 [2024-12-09 18:22:21.993133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.018 [2024-12-09 18:22:22.051270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.276 18:22:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.276 18:22:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:59.276 18:22:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:59.276 18:22:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:59.534 18:22:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:59.534 18:22:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:59.791 18:22:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:59.791 18:22:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:00.051 [2024-12-09 18:22:23.038127] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:00.355 nvme0n1 00:33:00.355 18:22:23 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:33:00.355 18:22:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:00.355 18:22:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:00.355 18:22:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:00.355 18:22:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:00.355 18:22:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.637 18:22:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:00.637 18:22:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:00.637 18:22:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:00.637 18:22:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:00.637 18:22:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.637 18:22:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.637 18:22:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:00.895 18:22:23 keyring_linux -- keyring/linux.sh@25 -- # sn=1072343865 00:33:00.895 18:22:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:00.895 18:22:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:00.895 18:22:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 1072343865 == \1\0\7\2\3\4\3\8\6\5 ]] 00:33:00.895 18:22:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1072343865 00:33:00.895 18:22:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:00.895 18:22:23 
keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:00.895 Running I/O for 1 seconds... 00:33:01.828 11453.00 IOPS, 44.74 MiB/s 00:33:01.828 Latency(us) 00:33:01.828 [2024-12-09T17:22:24.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.828 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:01.828 nvme0n1 : 1.01 11462.45 44.78 0.00 0.00 11101.32 6893.42 17961.72 00:33:01.828 [2024-12-09T17:22:24.869Z] =================================================================================================================== 00:33:01.828 [2024-12-09T17:22:24.869Z] Total : 11462.45 44.78 0.00 0.00 11101.32 6893.42 17961.72 00:33:01.828 { 00:33:01.828 "results": [ 00:33:01.828 { 00:33:01.828 "job": "nvme0n1", 00:33:01.828 "core_mask": "0x2", 00:33:01.828 "workload": "randread", 00:33:01.828 "status": "finished", 00:33:01.828 "queue_depth": 128, 00:33:01.828 "io_size": 4096, 00:33:01.828 "runtime": 1.01043, 00:33:01.828 "iops": 11462.44668111596, 00:33:01.828 "mibps": 44.77518234810922, 00:33:01.828 "io_failed": 0, 00:33:01.828 "io_timeout": 0, 00:33:01.828 "avg_latency_us": 11101.320241242798, 00:33:01.828 "min_latency_us": 6893.416296296296, 00:33:01.828 "max_latency_us": 17961.71851851852 00:33:01.828 } 00:33:01.828 ], 00:33:01.828 "core_count": 1 00:33:01.828 } 00:33:01.828 18:22:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:01.828 18:22:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:02.086 18:22:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:02.086 18:22:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:02.086 18:22:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:02.086 
18:22:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:02.086 18:22:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:02.086 18:22:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.343 18:22:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:02.343 18:22:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:02.343 18:22:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:02.343 18:22:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.343 18:22:25 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:02.343 18:22:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:02.600 [2024-12-09 18:22:25.622329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:02.600 [2024-12-09 18:22:25.623302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8df20 (107): Transport endpoint is not connected 00:33:02.600 [2024-12-09 18:22:25.624295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8df20 (9): Bad file descriptor 00:33:02.601 [2024-12-09 18:22:25.625294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:02.601 [2024-12-09 18:22:25.625315] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:02.601 [2024-12-09 18:22:25.625329] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:02.601 [2024-12-09 18:22:25.625343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:02.601 request: 00:33:02.601 { 00:33:02.601 "name": "nvme0", 00:33:02.601 "trtype": "tcp", 00:33:02.601 "traddr": "127.0.0.1", 00:33:02.601 "adrfam": "ipv4", 00:33:02.601 "trsvcid": "4420", 00:33:02.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:02.601 "prchk_reftag": false, 00:33:02.601 "prchk_guard": false, 00:33:02.601 "hdgst": false, 00:33:02.601 "ddgst": false, 00:33:02.601 "psk": ":spdk-test:key1", 00:33:02.601 "allow_unrecognized_csi": false, 00:33:02.601 "method": "bdev_nvme_attach_controller", 00:33:02.601 "req_id": 1 00:33:02.601 } 00:33:02.601 Got JSON-RPC error response 00:33:02.601 response: 00:33:02.601 { 00:33:02.601 "code": -5, 00:33:02.601 "message": "Input/output error" 00:33:02.601 } 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@33 -- # sn=1072343865 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1072343865 00:33:02.861 1 links removed 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 
00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@33 -- # sn=561685567 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 561685567 00:33:02.861 1 links removed 00:33:02.861 18:22:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1663143 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1663143 ']' 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1663143 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663143 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663143' 00:33:02.861 killing process with pid 1663143 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 1663143 00:33:02.861 Received shutdown signal, test time was about 1.000000 seconds 00:33:02.861 00:33:02.861 Latency(us) 00:33:02.861 [2024-12-09T17:22:25.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.861 [2024-12-09T17:22:25.902Z] =================================================================================================================== 00:33:02.861 [2024-12-09T17:22:25.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:02.861 18:22:25 keyring_linux -- common/autotest_common.sh@978 -- # 
wait 1663143 00:33:03.121 18:22:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1663127 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1663127 ']' 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1663127 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663127 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663127' 00:33:03.121 killing process with pid 1663127 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 1663127 00:33:03.121 18:22:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 1663127 00:33:03.380 00:33:03.380 real 0m5.258s 00:33:03.380 user 0m10.453s 00:33:03.380 sys 0m1.580s 00:33:03.380 18:22:26 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.380 18:22:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:03.380 ************************************ 00:33:03.380 END TEST keyring_linux 00:33:03.380 ************************************ 00:33:03.380 18:22:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- 
spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:03.380 18:22:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:03.380 18:22:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:03.380 18:22:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:03.380 18:22:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:03.380 18:22:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:03.380 18:22:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:03.380 18:22:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.380 18:22:26 -- common/autotest_common.sh@10 -- # set +x 00:33:03.380 18:22:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:03.380 18:22:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:03.380 18:22:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:03.380 18:22:26 -- common/autotest_common.sh@10 -- # set +x 00:33:05.283 INFO: APP EXITING 00:33:05.283 INFO: killing all VMs 00:33:05.283 INFO: killing vhost app 00:33:05.283 INFO: EXIT DONE 00:33:06.657 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:06.657 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:06.657 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:06.657 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:06.657 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:06.657 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:06.657 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:06.657 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:06.657 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:06.657 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:06.657 0000:80:04.6 (8086 0e26): Already using the 
ioatdma driver 00:33:06.657 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:06.657 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:06.657 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:06.915 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:06.916 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:06.916 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:08.292 Cleaning 00:33:08.292 Removing: /var/run/dpdk/spdk0/config 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:08.292 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:08.293 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:08.293 Removing: /var/run/dpdk/spdk1/config 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:08.293 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:08.293 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:08.293 Removing: /var/run/dpdk/spdk2/config 00:33:08.293 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:33:08.293 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:33:08.293 Removing: /var/run/dpdk/spdk2/hugepage_info
00:33:08.293 Removing: /var/run/dpdk/spdk3/config
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:08.293 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:33:08.293 Removing: /var/run/dpdk/spdk3/hugepage_info
00:33:08.293 Removing: /var/run/dpdk/spdk4/config
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:08.293 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:33:08.293 Removing: /var/run/dpdk/spdk4/hugepage_info
00:33:08.293 Removing: /dev/shm/bdev_svc_trace.1
00:33:08.293 Removing: /dev/shm/nvmf_trace.0
00:33:08.293 Removing: /dev/shm/spdk_tgt_trace.pid1341209
00:33:08.293 Removing: /var/run/dpdk/spdk0
00:33:08.293 Removing: /var/run/dpdk/spdk1
00:33:08.293 Removing: /var/run/dpdk/spdk2
00:33:08.293 Removing: /var/run/dpdk/spdk3
00:33:08.293 Removing: /var/run/dpdk/spdk4
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1339529
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1340272
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1341209
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1341554
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1342235
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1342375
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1343088
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1343219
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1343495
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1344708
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1345625
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1345941
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1346140
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1346357
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1346668
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1346825
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1346977
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1347212
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1347598
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1350590
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1350762
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1350924
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1350928
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1351354
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1351362
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1351788
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1351804
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1352086
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1352099
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1352266
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1352390
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1352772
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1352928
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1353149
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1355364
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1358008
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1365138
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1365540
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1368068
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1368233
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1370868
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1374596
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1376786
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1383818
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1389062
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1390379
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1391056
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1401447
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1403734
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1431626
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1434934
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1438767
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1443043
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1443164
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1443702
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1444358
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1445017
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1445414
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1445422
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1445565
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1445700
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1445706
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1446356
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1447009
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1447563
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1448058
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1448085
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1448226
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1449235
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1449965
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1455824
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1483921
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1486845
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1487958
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1489273
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1489380
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1489538
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1489668
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1490224
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1491541
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1492281
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1492709
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1494322
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1494633
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1495186
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1497701
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1501016
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1501017
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1501018
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1503235
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1508200
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1511477
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1515254
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1516215
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1517290
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1518384
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1521156
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1523739
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1526055
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1530350
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1530358
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1533144
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1533277
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1533526
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1533798
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1533808
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1536575
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1536918
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1539583
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1541557
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1545168
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1549055
00:33:08.293 Removing: /var/run/dpdk/spdk_pid1555548
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1559908
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1559950
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1572305
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1572827
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1573237
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1573647
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1574228
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1574758
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1575168
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1575574
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1578080
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1578222
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1582758
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1582818
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1586185
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1588791
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1595719
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1596116
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1598625
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1598783
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1601405
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1605103
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1607175
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1613520
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1619358
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1620646
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1621328
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1631505
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1633754
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1635769
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1640813
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1640818
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1643723
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1645126
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1646531
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1647388
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1648909
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1650294
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1655770
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1656092
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1656484
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1658038
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1658437
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1658728
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1661165
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1661178
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1662664
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1663127
00:33:08.551 Removing: /var/run/dpdk/spdk_pid1663143
00:33:08.551 Clean
00:33:08.551 18:22:31 -- common/autotest_common.sh@1453 -- # return 0
00:33:08.551 18:22:31 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:08.551 18:22:31 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:08.551 18:22:31 -- common/autotest_common.sh@10 -- # set +x
00:33:08.551 18:22:31 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:08.551 18:22:31 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:08.551 18:22:31 -- common/autotest_common.sh@10 -- # set +x
00:33:08.551 18:22:31 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:08.551 18:22:31 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:08.551 18:22:31 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:08.551 18:22:31 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:08.551 18:22:31 -- spdk/autotest.sh@398 -- # hostname
00:33:08.551 18:22:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:08.809 geninfo: WARNING: invalid characters removed from testname!
00:33:40.870 18:23:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:43.399 18:23:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:46.678 18:23:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:49.958 18:23:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:52.482 18:23:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:55.810 18:23:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:58.350 18:23:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:58.350 18:23:21 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:58.350 18:23:21 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:33:58.350 18:23:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:58.350 18:23:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:58.350 18:23:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:58.609 + [[ -n 1267696 ]]
00:33:58.609 + sudo kill 1267696
00:33:58.620 [Pipeline] }
00:33:58.634 [Pipeline] // stage
00:33:58.640 [Pipeline] }
00:33:58.654 [Pipeline] // timeout
00:33:58.659 [Pipeline] }
00:33:58.673 [Pipeline] // catchError
00:33:58.678 [Pipeline] }
00:33:58.693 [Pipeline] // wrap
00:33:58.699 [Pipeline] }
00:33:58.711 [Pipeline] // catchError
00:33:58.720 [Pipeline] stage
00:33:58.722 [Pipeline] { (Epilogue)
00:33:58.735 [Pipeline] catchError
00:33:58.737 [Pipeline] {
00:33:58.749 [Pipeline] echo
00:33:58.751 Cleanup processes
00:33:58.758 [Pipeline] sh
00:33:59.076 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:59.076 1673818 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:59.115 [Pipeline] sh
00:33:59.401 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:59.401 ++ grep -v 'sudo pgrep'
00:33:59.401 ++ awk '{print $1}'
00:33:59.401 + sudo kill -9
00:33:59.401 + true
00:33:59.414 [Pipeline] sh
00:33:59.699 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:09.691 [Pipeline] sh
00:34:09.975 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:09.975 Artifacts sizes are good
00:34:09.991 [Pipeline] archiveArtifacts
00:34:09.998 Archiving artifacts
00:34:10.131 [Pipeline] sh
00:34:10.416 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:10.431 [Pipeline] cleanWs
00:34:10.441 [WS-CLEANUP] Deleting project workspace...
00:34:10.441 [WS-CLEANUP] Deferred wipeout is used...
00:34:10.448 [WS-CLEANUP] done
00:34:10.450 [Pipeline] }
00:34:10.467 [Pipeline] // catchError
00:34:10.478 [Pipeline] sh
00:34:10.760 + logger -p user.info -t JENKINS-CI
00:34:10.769 [Pipeline] }
00:34:10.782 [Pipeline] // stage
00:34:10.787 [Pipeline] }
00:34:10.801 [Pipeline] // node
00:34:10.807 [Pipeline] End of Pipeline
00:34:10.844 Finished: SUCCESS